Sample records for user views visual

  1. EvolView, an online tool for visualizing, annotating and managing phylogenetic trees.

    PubMed

    Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J; Hu, Songnian; Chen, Wei-Hua

    2012-07-01

    EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.

  2. EvolView, an online tool for visualizing, annotating and managing phylogenetic trees

    PubMed Central

    Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J.; Hu, Songnian; Chen, Wei-Hua

    2012-01-01

    EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html. PMID:22695796

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pachuilo, Andrew R; Ragan, Eric; Goodall, John R

    Visualization tools can take advantage of multiple coordinated views to support analysis of large, multidimensional data sets. Effective design of such views and layouts can be challenging, but understanding users' analysis strategies can inform design improvements. We outline an approach for intelligent design configuration of visualization tools with multiple coordinated views, and we discuss a proposed software framework to support the approach. The proposed software framework could capture and learn from user interaction data to automate new compositions of views and widgets. Such a framework could reduce the time needed for meta-analysis of visualization use and lead to more effective visualization design.

  4. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
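
    As a rough illustration of the kind of aggregation that underlies such a loyalty flow view, the hedged Python sketch below bins a query log by month, picks each user's dominant engine, and counts switches. The log format, names, and monthly binning are illustrative assumptions, not LoyalTracker's actual pipeline.

    ```python
    # Minimal sketch: per-user search-engine loyalty from a (user, date, engine) log.
    from collections import Counter, defaultdict
    from datetime import datetime

    log = [
        ("u1", "2013-01-03", "engineA"),
        ("u1", "2013-02-11", "engineA"),
        ("u1", "2013-03-07", "engineB"),
        ("u2", "2013-01-20", "engineB"),
    ]

    def monthly_dominant_engine(log):
        """For each (user, month), pick the engine with the most queries."""
        counts = defaultdict(Counter)
        for user, ts, engine in log:
            month = datetime.strptime(ts, "%Y-%m-%d").strftime("%Y-%m")
            counts[(user, month)][engine] += 1
        return {key: c.most_common(1)[0][0] for key, c in counts.items()}

    def switch_counts(dominant):
        """Count changes of dominant engine between consecutive months per user."""
        per_user = defaultdict(list)
        for (user, month), engine in sorted(dominant.items()):
            per_user[user].append(engine)
        return {u: sum(a != b for a, b in zip(seq, seq[1:]))
                for u, seq in per_user.items()}

    print(switch_counts(monthly_dominant_engine(log)))  # {'u1': 1, 'u2': 0}
    ```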

  5. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    … the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a rich, visual analytics interface containing coordinated views with 2D and 3D visualizations … Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture

  6. Visualizing Rank Time Series of Wikipedia Top-Viewed Pages.

    PubMed

    Xia, Jing; Hou, Yumeng; Chen, Yingjie Victor; Qian, Zhenyu Cheryl; Ebert, David S; Chen, Wei

    2017-01-01

    Visual clutter is a common challenge when visualizing large rank time series data. WikiTopReader, a reader of Wikipedia page rank, lets users explore connections among top-viewed pages by connecting page-rank behaviors with page-link relations. Such a combination enhances the unweighted Wikipedia page-link network and focuses attention on the page of interest. A set of user evaluations shows that the system effectively represents evolving ranking patterns and page-wise correlation.

  7. Dynamics of backlight luminance for using smartphone in dark environment

    NASA Astrophysics Data System (ADS)

    Na, Nooree; Jang, Jiho; Suk, Hyeon-Jeong

    2014-02-01

    This study developed dynamic backlight luminance, which gradually changes as time passes, for comfortable use of a smartphone display in a dark environment. The study was carried out in two stages. In the first stage, a user test was conducted to identify the optimal luminance by assessing facial squint level, subjective glare evaluation, eye blink frequency and users' subjective preferences. Based on the results of the user test, the dynamics of backlight luminance was designed. It has two levels of luminance: the optimal level for initial viewing, to avoid sudden glare or fatigue to users' eyes, and the optimal level for constant viewing, which is comfortable but also bright enough for constant reading of the displayed material. The luminance for initial viewing starts from 10 cd/m2 and gradually increases to 40 cd/m2 over 20 seconds for users' visual comfort at constant viewing. In the second stage, a validation test on the dynamics of backlight luminance was conducted to verify the effectiveness of the developed dynamics. It involved users' subjective preferences, eye blink frequency, and brainwave analysis using electroencephalography (EEG), and confirmed that the proposed dynamic backlighting enhances users' visual comfort and visual cognition, particularly when using smartphones in a dark environment.
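
    The two-level behavior described above fits in a few lines. The sketch below assumes a linear ramp between the two published levels (10 to 40 cd/m2 over 20 s); the ramp's actual shape is not specified in the abstract, so the linearity is an assumption.

    ```python
    # Sketch of the dynamic backlight: start at 10 cd/m^2, ramp to 40 cd/m^2
    # over the first 20 seconds, then hold (linear ramp assumed).
    def backlight_luminance(t_seconds, start=10.0, end=40.0, ramp_time=20.0):
        """Return target luminance (cd/m^2) t seconds after viewing begins."""
        if t_seconds >= ramp_time:
            return end
        return start + (end - start) * (t_seconds / ramp_time)

    for t in (0, 5, 10, 20, 60):
        print(t, backlight_luminance(t))  # 10.0, 17.5, 25.0, 40.0, 40.0
    ```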

  8. VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.

    PubMed

    Langner, Ricardo; Horak, Tom; Dachselt, Raimund

    2017-08-29

    We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.

  9. bioWidgets: data interaction components for genomics.

    PubMed

    Fischer, S; Crabtree, J; Brunk, B; Gibson, M; Overton, G C

    1999-10-01

    The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user interface and data visualization components. We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user interface components and defines an architecture for assembling applications. The toolkit is founded on established software engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extendible applications: AnnotView, BlastView and AlignView.

  10. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide-field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.
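
    A minimal sketch of the resampling step described above, under common assumptions (equirectangular panorama, pinhole output camera, yaw/pitch head pose without roll); the paper's actual mathematical model is not reproduced here, and all names are illustrative.

    ```python
    # Sketch: map an output pixel, through the sensed head pose, to a sample
    # position in an equirectangular (spherical) panorama.
    import math

    def pixel_ray(x, y, w, h, fov_x, yaw, pitch):
        """Direction (yaw, pitch in radians) of the ray through output pixel
        (x, y) of a w*h pinhole view with horizontal field of view fov_x,
        rotated by the user's head yaw/pitch (roll ignored for brevity)."""
        fx = (w / 2) / math.tan(fov_x / 2)
        dx, dy, dz = x - w / 2, y - h / 2, fx
        cp, sp, cy, sy = math.cos(pitch), math.sin(pitch), math.cos(yaw), math.sin(yaw)
        dy, dz = dy * cp - dz * sp, dy * sp + dz * cp   # rotate about X (pitch)
        dx, dz = dx * cy + dz * sy, -dx * sy + dz * cy  # rotate about Y (yaw)
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        return math.atan2(dx, dz), math.asin(-dy / r)

    def ray_to_panorama_uv(ray_yaw, ray_pitch, pano_w, pano_h):
        """Longitude -> u, latitude -> v in the equirectangular image."""
        u = (ray_yaw % (2 * math.pi)) / (2 * math.pi) * pano_w
        v = (0.5 - ray_pitch / math.pi) * pano_h
        return u, v

    print(ray_to_panorama_uv(*pixel_ray(320, 240, 640, 480, 1.5, 0.3, -0.1), 4096, 2048))
    ```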

  11. A New Definition for Ground Control

    NASA Technical Reports Server (NTRS)

    2002-01-01

    LandForm(R) VisualFlight(R) blends the power of a geographic information system with the speed of a flight simulator to transform a user's desktop computer into a "virtual cockpit." The software product, which is fully compatible with all Microsoft(R) Windows(R) operating systems, provides distributed, real-time three-dimensional flight visualization over a host of networks. From a desktop, a user can immediately obtain a cockpit view, a chase-plane view, or an airborne tracker view. A customizable display also allows the user to overlay various flight parameters, including latitude, longitude, altitude, pitch, roll, and heading information. Rapid Imaging Software sought assistance from NASA, and the VisualFlight technology came to fruition under a Phase II SBIR contract with Johnson Space Center in 1998. Three years later, on December 13, 2001, Ken Ham successfully flew NASA's X-38 spacecraft from a remote, ground-based cockpit using LandForm VisualFlight as part of his primary situation awareness display in a flight test at Edwards Air Force Base, California.

  12. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  13. Coastal On-line Assessment and Synthesis Tool 2.0

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Nguyen, Beth

    2011-01-01

    COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.

  14. Development of a Simple Image Processing Application that Makes Abdominopelvic Tumor Visible on Positron Emission Tomography/Computed Tomography Image.

    PubMed

    Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    In this study, we have developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window having a slider control. The slider control enables the user to view images and select the one which seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of an abdominopelvic tumor on a prediuretic PET/CT image.
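
    For readers unfamiliar with SSR, the hedged sketch below shows the basic mechanism in Python rather than the authors' MATLAB: the image drives an ensemble of noisy threshold detectors whose averaged binary responses are displayed, with the noise level playing roughly the role of the application's slider. Detector count, threshold, and noise model are illustrative assumptions.

    ```python
    # Sketch of suprathreshold stochastic resonance (SSR) image enhancement.
    import numpy as np

    def ssr_enhance(image, noise_sigma, n_detectors=32, threshold=0.5):
        """image: float array scaled to [0, 1]. Returns the ensemble average
        of n_detectors noisy threshold-unit responses."""
        rng = np.random.default_rng(0)
        out = np.zeros_like(image, dtype=float)
        for _ in range(n_detectors):
            noisy = image + rng.normal(0.0, noise_sigma, size=image.shape)
            out += (noisy > threshold)  # binary response of one detector
        return out / n_detectors

    # Varying noise_sigma (the slider's role) changes how much low-contrast
    # structure becomes visible in the averaged output.
    img = np.linspace(0, 1, 64).reshape(8, 8)
    enhanced = ssr_enhance(img, noise_sigma=0.15)
    ```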

  15. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes. Our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision as compared to the 2D view. However, it is still uncertain if the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
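
    The depth cue at work here can be made concrete with the standard stereo geometry relation: a point seen with disparity d (pixels) by two cameras a baseline b apart, with focal length f (pixels), lies at depth z = f*b/d. The numbers in this sketch are illustrative only.

    ```python
    # Worked example of depth from binocular disparity.
    def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
        """Depth (metres) of a point from its left/right image disparity."""
        return f_pixels * baseline_m / disparity_pixels

    # Eye-like 6.5 cm baseline, 800 px focal length, 20 px disparity -> 2.6 m.
    print(depth_from_disparity(800.0, 0.065, 20.0))
    ```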

  16. The shaping of information by visual metaphors.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2008-01-01

    The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.

  17. FLIP for FLAG model visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wooten, Hasani Omar

    A graphical user interface has been developed for FLAG users. FLIP (FLAG Input deck Parser) provides users with an organized view of FLAG models and a means for efficiently and easily navigating and editing nodes, parameters, and variables.

  18. High-level user interfaces for transfer function design with semantics.

    PubMed

    Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter

    2006-01-01

    Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.
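
    A hedged sketch of how PCA can supply such high-level handles: expert-designed transfer functions become training vectors, and the non-expert manipulates only the leading principal components. The dimensions and data below are invented for illustration; this is not the paper's implementation.

    ```python
    # Sketch: low-dimensional "semantic" handles for transfer functions via PCA.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    expert_tfs = rng.random((20, 256))      # 20 expert transfer functions, 256 samples each

    pca = PCA(n_components=2)
    weights = pca.fit_transform(expert_tfs)  # each function reduced to 2 handles

    # A user-facing slider would move along a component; reconstruct the full
    # transfer function from the two high-level parameters:
    new_tf = pca.inverse_transform([[0.5, -0.2]])[0]
    assert new_tf.shape == (256,)
    ```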

  19. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  20. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    PubMed

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  1. The Latest Earth and Space Data Visualizations Are Used to Engage Learners Around the World through Diverse Educational Platforms with NOAA's Publicly Available Catalogs from Science On a Sphere and NOAA View and State-of-the-Art Display Technology

    NASA Astrophysics Data System (ADS)

    McDougall, C.; Peddicord, H.; Russell, E. L.; Hackathorn, E. J.; Pisut, D.; MacIntosh, E.

    2016-12-01

    NOAA's data visualization education and technology platforms, Science On a Sphere and NOAA View, are providing content for innovative and diverse educational platforms worldwide. Science On a Sphere (SOS) is a system composed of a large-scale spherical display and a curated data catalog. SOS displays are on exhibit in more than 140 locations in 26 countries and 29 US states, reaching at least 35 million people every year. Additionally, the continuously updated data catalog, consisting of over 500 visualizations accompanied by descriptions, videos, and related content, is publicly available for download. This catalog is used by a wide variety of users including planetariums, other spherical displays, and teachers. To further broaden the impact of SOS, SOS Explorer, a flat-screen version of SOS that can be used in schools and museums, has over 100 of the SOS datasets and enables students and other users to dig into the data in ways that aren't possible with SOS. Another resource from NOAA, NOAA View, is an easy-to-use portal to NOAA's vast data archives, including historical datasets that go back to 1880 and models for ocean, atmosphere, land, cryosphere, climate and weather. NOAA View provides hundreds of data variables within a single interface, allowing the user to browse, interrogate, and download resources from NOAA's vast archives. And, through story maps, users can see how data can be used to understand our planet and improve our lives. Together, these provide invaluable resources to educators and technology pioneers. Both NOAA View and the SOS data catalog enable educators, students and communicators to easily ingest complex and often stunning visualizations. The visualizations are available in formats that can be incorporated into a number of different display technologies to maximize their use. Although making the visualizations available to users is a technological hurdle, an equally large hurdle is making them understandable to viewers. In this presentation we will discuss the challenges we've encountered in making these resources usable to educators and education institutions, as well as the feedback we've received about the value of these resources.

  2. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  3. Bridging the Host-Network Divide: Survey, Taxonomy, and Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Duggirala, Vedavyas; Correa, Ricardo

    2007-04-17

    Abstract: "This paper presents a new direction in security awareness tools for system administration--the Host-Network (HoNe) Visualizer. Our requirements for the HoNe Visualizer come from needs system administrators expressed in interviews, from reviewing the literature, and from conducting usability studies with prototypes. We present a tool taxonomy that serves as a framework for our literature review, and we use the taxonomy to show what is missing in the administrator's arsenal. Then we unveil our tool and its supporting infrastructure that we believe will fill the empty niche. We found that most security tools provide either an internal view of amore » host or an external view of traffic on a network. Our interviewees revealed how they must construct a mental end-to-end view from separate tools that individually give an incomplete view, expending valuable time and mental effort. Because of limitations designed into TCP/IP [RFC-791, RFC-793], no tool can effectively correlate host and network data into an end-to-end view without kernel modifications. Currently, no other visualization exists to support end-to-end analysis. But HoNe's infrastructure overcomes TCP/IP's limitations bridging the network and transport layers in the network stack and making end-to-end correlation possible. The capstone is the HoNe Visualizer that amplifies the users' cognitive power and reduces their mental workload by illustrating the correlated data graphically. Users said HoNe would be particularly good for discovering day-zero exploits. Our usability study revealed that users performed better on intrusion detection tasks using our visualization than with tools they were accustomed to using regardless of their experience level."« less

  4. A tool for multi-scale modelling of the renal nephron

    PubMed Central

    Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.

    2011-01-01

    We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210

  5. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery

    PubMed Central

    Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-01

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope’s field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a “perfect” visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon’s armamentarium. PMID:29581920

  6. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery.

    PubMed

    Beez, Thomas; Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-24

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope's field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a "perfect" visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon's armamentarium.

  7. A Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data

    NASA Astrophysics Data System (ADS)

    Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.

    2017-09-01

    Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analysis, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of their dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in the small multiples, a heatmap and a timeline to provide various views for better understanding and also further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and also helps the understanding of the results using multiple linked visualizations.
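
    As a rough stand-in for the platform's server-side step, the sketch below co-clusters a synthetic station-by-year temperature matrix with scikit-learn's SpectralCoclustering. The abstract does not name the platform's actual algorithm, so treat this purely as an illustration of co-clustering rows (stations) and columns (years) at once.

    ```python
    # Sketch: co-clustering a 28-station x 20-year temperature matrix.
    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(1)
    temps = rng.normal(10, 3, size=(28, 20))   # synthetic annual mean temperatures

    model = SpectralCoclustering(n_clusters=4, random_state=0)
    model.fit(temps)

    print(model.row_labels_)     # co-cluster assignment of each station
    print(model.column_labels_)  # co-cluster assignment of each year
    ```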

  8. TrajGraph: A Graph-Based Visual Analytics Approach to Studying Urban Network Centralities Using Taxi Trajectory Data.

    PubMed

    Huang, Xiaoke; Zhao, Ye; Yang, Jing; Zhang, Chong; Ma, Chao; Ye, Xinyue

    2016-01-01

    We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which are smaller than the original street-level graph. Graph centralities, including PageRank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.
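
    The centrality computation TrajGraph describes maps directly onto standard graph libraries. Below is a minimal sketch with networkx on an invented toy street graph, where edge weights stand in for taxi-derived travel costs; it is not the paper's code.

    ```python
    # Sketch: PageRank and betweenness centrality on a toy weighted street graph.
    import networkx as nx

    g = nx.DiGraph()
    g.add_weighted_edges_from([
        ("A", "B", 1.0), ("B", "C", 2.0), ("C", "A", 1.5),
        ("B", "D", 1.0), ("D", "C", 0.5),
    ])

    pagerank = nx.pagerank(g, weight="weight")
    betweenness = nx.betweenness_centrality(g, weight="weight")
    print(pagerank)
    print(betweenness)
    ```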

  9. Visual Analytics for Heterogeneous Geoscience Data

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Yu, L.; Zhu, F.; Rilee, M. L.; Kuo, K. S.; Jiang, H.; Yu, H.

    2017-12-01

    Geoscience data obtained from diverse sources have been routinely leveraged by scientists to study various phenomena. The principal data sources include observations and model simulation outputs. These data are characterized by spatiotemporal heterogeneity originated from different instrument design specifications and/or computational model requirements used in data generation processes. Such inherent heterogeneity poses several challenges in exploring and analyzing geoscience data. First, scientists often wish to identify features or patterns co-located among multiple data sources to derive and validate certain hypotheses. Heterogeneous data make it a tedious task to search such features in dissimilar datasets. Second, features of geoscience data are typically multivariate. It is challenging to tackle the high dimensionality of geoscience data and explore the relations among multiple variables in a scalable fashion. Third, there is a lack of transparency in traditional automated approaches, such as feature detection or clustering, in that scientists cannot intuitively interact with their analysis processes and interpret results. To address these issues, we present a new scalable approach that can assist scientists in analyzing voluminous and diverse geoscience data. We expose a high-level query interface that allows users to easily express their customized queries to search features of interest across multiple heterogeneous datasets. For identified features, we develop a visualization interface that enables interactive exploration and analytics in a linked-view manner. Specific visualization techniques, from scatter plots to parallel coordinates, are employed in each view to allow users to explore various aspects of features. Different views are linked and refreshed according to user interactions in any individual view. In such a manner, a user can interactively and iteratively gain understanding into the data through a variety of visual analytics operations. We demonstrate with use cases how scientists can combine the query and visualization interfaces to enable a customized workflow facilitating studies using heterogeneous geoscience datasets.

  10. Program Supports Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Keith, Stephan

    1994-01-01

    Primary purpose of General Visualization System (GVS) computer program is to support scientific visualization of data generated by panel-method computer program PMARC_12 (inventory number ARC-13362) on Silicon Graphics Iris workstation. Enables user to view PMARC geometries and wakes as wire frames or as light shaded objects. GVS is written in C language.

  11. Simulation and visualization of fundamental optics phenomenon by LabVIEW

    NASA Astrophysics Data System (ADS)

    Lyu, Bohan

    2017-08-01

    Most instructors teach complex phenomena with equations and static illustrations, without interactive multimedia. Students usually memorize phenomena by taking notes. However, notes or complex formulas alone cannot help users visualize the behavior of a photonics system. LabVIEW is a good tool for automatic measurement. Moreover, the simplicity of coding in LabVIEW makes it suitable not only for automatic measurement but also for simulation and visualization of fundamental optics phenomena. In this paper, five simple optics phenomena are discussed and simulated with LabVIEW: Snell's law, Hermite-Gaussian beam transverse modes, square and circular aperture diffraction, polarized waves and the Poincare sphere, and finally the Fabry-Perot etalon in the spectral domain.

  12. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In the past years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing, interactive exploration and visualization using Virtual Reality (VR) technology. We have had great successes with research studies on extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom given to the users (forecasters/scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UI) that largely eliminates the obstacles and the time it usually takes to set up the visualization parameters and an appropriate camera view on a certain atmospheric phenomenon. We have found our inspiration in the way our operational forecasters work in the weather room. We decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines WEB-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we present the first user experiences with this approach.

  13. Spacecraft Guidance, Navigation, and Control Visualization Tool

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.

  14. Systems and methods for interactive virtual reality process control and simulation

    DOEpatents

    Daniel, Jr., William E.; Whitney, Michael A.

    2001-01-01

    A system for visualizing, controlling and managing information includes a data analysis unit for interpreting and classifying raw data using analytical techniques. A data flow coordination unit routes data from its source to other components within the system. A data preparation unit handles the graphical preparation of the data and a data rendering unit presents the data in a three-dimensional interactive environment where the user can observe, interact with, and interpret the data. A user can view the information on various levels, from a high overall process level view, to a view illustrating linkage between variables, to view the hard data itself, or to view results of an analysis of the data. The system allows a user to monitor a physical process in real-time and further allows the user to manage and control the information in a manner not previously possible.

  15. VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M Pauline

    2007-06-30

    The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics - the formalization and specialization of information related to the process and products of visualization.

  16. PANDA-view: An easy-to-use tool for statistical analysis and visualization of quantitative proteomics data.

    PubMed

    Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping

    2018-05-22

    Compared with the numerous software tools developed for identification and quantification of -omics data, there remains a lack of suitable tools for both downstream analysis and data visualization. To help researchers better understand the biological meanings in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various kinds of analysis methods such as normalization, missing value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly-used data visualization methods including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.
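
    A static, minimal rendition of the volcano plot mentioned above (PANDA-view's version is interactive): log2 fold change against -log10 p-value, with an arbitrary significance threshold highlighted. The data are synthetic; this only illustrates the plot type, not PANDA-view's implementation.

    ```python
    # Sketch of a volcano plot on synthetic differential-expression data.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(7)
    log2_fc = rng.normal(0, 1.5, 500)        # synthetic log2 fold changes
    p_values = rng.uniform(1e-6, 1.0, 500)   # synthetic p-values

    sig = (np.abs(log2_fc) > 1) & (p_values < 0.05)  # illustrative cutoff
    plt.scatter(log2_fc, -np.log10(p_values),
                c=np.where(sig, "red", "grey"), s=8)
    plt.xlabel("log2 fold change")
    plt.ylabel("-log10 p-value")
    plt.title("Volcano plot (synthetic data)")
    plt.show()
    ```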

  17. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects and so on. Therefore, we developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
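
    SoundView's coding algorithms are customizable and not detailed in the abstract. As one hedged illustration of object-to-stereo encoding, the sketch below pans a tone left or right according to an object's azimuth using constant-power panning; the mapping and parameters are assumptions.

    ```python
    # Sketch: encode an object's horizontal location as a stereo-panned tone.
    import numpy as np

    def object_to_stereo(azimuth, duration=0.5, freq=660.0, rate=44100):
        """azimuth in [-1, 1]: -1 = far left, +1 = far right.
        Returns an (n_samples, 2) array using constant-power panning."""
        t = np.arange(int(duration * rate)) / rate
        tone = np.sin(2 * np.pi * freq * t)
        theta = (azimuth + 1) * np.pi / 4      # map [-1, 1] -> [0, pi/2]
        left, right = np.cos(theta) * tone, np.sin(theta) * tone
        return np.stack([left, right], axis=1)

    samples = object_to_stereo(-0.8)  # object to the user's left
    ```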

  18. View-Dependent Streamline Deformation and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Xin; Edwards, John; Chen, Chun-Ming

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual cluttering for visualizing 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.

  19. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function (heterophoria and near point of accommodation values), as well as in eyestrain and visually induced motion sickness levels, were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. According to subjective opinions, which further support these measurements, using a stereoscopic three-dimensional system for up to 2 h was acceptable for most of the users regardless of their age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.

    2016-12-01

    Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an xml file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later. More information, including downloads, tutorials, and example movies created fully within SCEC-VDO is available here: http://scecvdo.usc.edu

  1. ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures

    NASA Astrophysics Data System (ADS)

    Liu, Qingbin; Li, Jiang; Liu, Jie

    2017-02-01

    Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanics and fluid dynamics, and includes multi-body system and multi-physics coupling. However, the visualization capability of Abaqus through its CAE module is limited. Models from microtomography have extremely complicated structures, and Abaqus output datasets are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format with a Python script we developed, and then use ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visualizations. ParaView's parallel visualization makes it possible to visualize very large datasets. To support fully parallel visualization, the Python script partitions the data by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extensibility, the script can be extended to process other kinds of problems in Abaqus. We share the script with Abaqus users on GitHub.
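
    To make the conversion step concrete, here is a hedged, minimal writer for the XML-based VTK unstructured-grid format (.vtu), handling only nodes and 4-node tetrahedra. The published script, its element coverage, and its data partitioning are considerably more involved; all names below are assumptions.

    ```python
    # Sketch: write nodes and tetrahedral elements to a minimal ASCII .vtu file.
    def write_vtu(path, nodes, tets):
        """nodes: list of (x, y, z); tets: lists of 4 zero-based node indices."""
        with open(path, "w") as f:
            f.write('<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian">\n')
            f.write(f'<UnstructuredGrid><Piece NumberOfPoints="{len(nodes)}" '
                    f'NumberOfCells="{len(tets)}">\n')
            f.write('<Points><DataArray type="Float32" NumberOfComponents="3" format="ascii">\n')
            f.write("\n".join("%g %g %g" % tuple(p) for p in nodes))
            f.write("\n</DataArray></Points>\n<Cells>\n")
            f.write('<DataArray type="Int32" Name="connectivity" format="ascii">\n')
            f.write("\n".join(" ".join(str(i) for i in t) for t in tets))
            f.write('\n</DataArray>\n<DataArray type="Int32" Name="offsets" format="ascii">\n')
            f.write(" ".join(str(4 * (i + 1)) for i in range(len(tets))))
            f.write('\n</DataArray>\n<DataArray type="UInt8" Name="types" format="ascii">\n')
            f.write(" ".join("10" for _ in tets))  # VTK cell type 10 = linear tetrahedron
            f.write("\n</DataArray>\n</Cells>\n</Piece></UnstructuredGrid></VTKFile>\n")

    # One tetrahedron as a smoke test; ParaView should open the resulting file.
    write_vtu("mesh.vtu", [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)], [[0, 1, 2, 3]])
    ```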

  2. Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.

    2016-06-01

    Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of these studies. The main objective of this paper is to identify potential differences in user performance between static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. Movement and actions in the virtual environment were also recorded for the interactive variant. The results show that participants dealt with the tasks faster when using static visualization. The average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  3. Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges

    NASA Astrophysics Data System (ADS)

    Cherukuru, N. W.; Calhoun, R.

    2016-06-01

    Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user the ability to see the wind in a literal sense. This approach could find applications in aviation safety and atmospheric data visualization, as well as in weather education and public outreach. As a proof of concept, we used lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we briefly describe the methodology of this feasibility study and present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.

  4. View-Dependent Streamline Deformation and Exploration

    PubMed Central

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung

    2016-01-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few risks losing important features. We propose a new streamline exploration approach that visually manipulates the cluttered streamlines, pulling visible layers apart to reveal the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, users can move their focus and examine the vector or tensor field freely. PMID:26600061

  5. View-Dependent Streamline Deformation and Exploration.

    PubMed

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R; Wong, Pak Chung

    2016-07-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few risks losing important features. We propose a new streamline exploration approach that visually manipulates the cluttered streamlines, pulling visible layers apart to reveal the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, users can move their focus and examine the vector or tensor field freely.

  6. TargetVue: Visual Analysis of Anomalous User Behaviors in Online Communication Systems.

    PubMed

    Cao, Nan; Shi, Conglei; Lin, Sabrina; Lu, Jie; Lin, Yu-Ru; Lin, Ching-Yung

    2016-01-01

    Users with anomalous behaviors in online communication systems (e.g. email and social media platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. In particular, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors, which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.
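
    As a generic stand-in for the unsupervised detector that feeds such views (not TargetVue's published model; the features are hypothetical), scoring users for anomalous communication behavior might look like:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Rows: users; columns: behavior features (e.g., messages/day, reply ratio).
        rng = np.random.default_rng(0)
        X = rng.random((1000, 4))  # placeholder for real per-user feature vectors

        model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
        model.fit(X)
        scores = -model.score_samples(X)      # higher = more anomalous
        suspects = np.argsort(scores)[-20:]   # top-20 users to inspect in context views

    The detector only ranks candidates; the point of the coordinated views is to give a human the context needed to confirm or dismiss each one.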

  7. Data Mining.

    ERIC Educational Resources Information Center

    Benoit, Gerald

    2002-01-01

    Discusses data mining (DM) and knowledge discovery in databases (KDD), taking the view that KDD is the larger view of the entire process, with DM emphasizing the cleaning, warehousing, mining, and visualization of knowledge discovery in databases. Highlights include algorithms; users; the Internet; text mining; and information extraction.…

  8. PeptideNavigator: An interactive tool for exploring large and complex data sets generated during peptide-based drug design projects.

    PubMed

    Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J

    2018-01-01

    There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  10. RVA: A Plugin for ParaView 3.14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-04

    RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
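
    Both RVA records describe functionality layered on ParaView's scripted pipeline. For orientation, a minimal pvpython sketch of that pipeline (the file name is hypothetical, and RVA's own filters would be loaded separately as a plugin):

        # Run with ParaView's pvpython so that paraview.simple is importable.
        from paraview.simple import (OpenDataFile, Show, Render, ResetCamera,
                                     GetActiveView, SaveScreenshot)

        reader = OpenDataFile("reservoir_model.vtu")  # hypothetical simulation output
        display = Show(reader)
        display.Representation = "Surface"

        view = GetActiveView()
        ResetCamera(view)
        Render()
        SaveScreenshot("reservoir.png", view)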

  11. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

    Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized for the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system over such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is an efficient retrieval approach, named the location-driven approach, utilizing the correlation between visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.
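
    As a generic illustration of SVM-based active learning of the kind the paper extends (feature-location pairs reduced to plain vectors; all data synthetic), uncertainty sampling against the SVM margin looks like:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical feature-location vectors: [visual features ..., x, y]
        X_pool = rng.normal(size=(500, 6))
        y_pool = (X_pool[:, 0] + X_pool[:, -1] > 0).astype(int)  # synthetic relevance

        # Seed with a few labels from each class, then query uncertain samples.
        labeled = (list(np.flatnonzero(y_pool == 0)[:5])
                   + list(np.flatnonzero(y_pool == 1)[:5]))
        for _ in range(5):  # five rounds of active learning
            clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
            margin = np.abs(clf.decision_function(X_pool))
            # Query the most uncertain sample (closest to the decision boundary).
            query = next(i for i in np.argsort(margin) if i not in labeled)
            labeled.append(int(query))  # in practice: ask the user for this label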

  12. Interactive Learning Modules: Enabling Near Real-Time Oceanographic Data Use In Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Fundis, A. T.; Risien, C. M.

    2012-12-01

    The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.
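
    As a toy illustration of the statistics the visualization tools in item 1) expose (a synthetic series stands in for OOI sensor data):

        import numpy as np

        # Synthetic daily sea-water temperature series (placeholder for OOI data).
        days = np.arange(90)
        temp = 12.0 + 0.02 * days + np.random.default_rng(1).normal(0, 0.3, 90)

        mean, std = temp.mean(), temp.std()
        slope, intercept = np.polyfit(days, temp, 1)  # least-squares regression line
        print(f"mean={mean:.2f} C  std={std:.2f} C  trend={slope * 365:.2f} C/yr")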

  13. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views of the data, to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component of comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage.

  14. Virtual Reality Visualization of Permafrost Dynamics Along a Transect Through Northern Alaska

    NASA Astrophysics Data System (ADS)

    Chappell, G. G.; Brody, B.; Webb, P.; Chord, J.; Romanovsky, V.; Tipenko, G.

    2004-12-01

    Understanding permafrost dynamics poses a significant challenge for researchers and planners. Our project uses nontraditional visualization tools to create a 3-D interactive virtual-reality environment in which permafrost dynamics can be explored and experimented with. We have incorporated a numerical soil temperature model by Gennadiy Tipenko and Vladimir Romanovsky of the Geophysical Institute at the University of Alaska Fairbanks into an animated tour in space and time in the virtual reality facility of the Arctic Region Supercomputing Center at the University of Alaska Fairbanks. The software is being written by undergraduate interns Patrick Webb and Jordanna Chord under the direction of Professors Chappell and Brody. When using our software, the user appears to be surrounded by a 3-D computer-generated model of the state of Alaska. The eastern portion of the state is displaced upward from the western portion. The data are represented on an animated vertical strip running between the two parts, as if eastern Alaska were raised up, and the soil at the cut could be viewed. We use coloring to highlight significant properties and features of the soil: temperature, the active layer, etc. The user can view data from various parts of the state simply by walking to the appropriate location in the model, or by using a flying-style interface to cover longer distances. Using a control panel, the user can also alter the time, viewing the data for a particular date, or watching the data change with time: a high-speed movie in which long-term changes in permafrost are readily apparent. In the second phase of the project, we connect the visualization directly to the model, running in real time. We allow the user to manipulate the input data and get immediate visual feedback. For example, the user might specify the kind and placement of ground cover, by "painting" snowpack, plant species, or fire damage, and be able to see the effect on permafrost stability with no significant time lag.

  15. Spherical Panoramas for Astrophysical Data Visualization

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-05-01

    Data immersion has advantages in astrophysical visualization. Complex multi-dimensional data and phase spaces can be explored in a seamless and interactive viewing environment. Putting the user in the data is a first step toward immersive data analysis. We present a technique for creating 360° spherical panoramas with astrophysical data. The three-dimensional software package Blender and the Google Spatial Media module are used together to immerse users in data exploration. Several examples employing these methods exhibit how the technique works using different types of astronomical data.
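
    Spherical panoramas rest on the equirectangular mapping between a viewing direction and an image pixel. A compact, tool-independent sketch of that mapping:

        import numpy as np

        def direction_to_equirect(v, width, height):
            """Map a unit view direction to pixel coordinates in an
            equirectangular panorama (longitude -> x, latitude -> y)."""
            x, y, z = v / np.linalg.norm(v)
            lon = np.arctan2(y, x)          # [-pi, pi]
            lat = np.arcsin(z)              # [-pi/2, pi/2]
            px = (lon / (2 * np.pi) + 0.5) * (width - 1)
            py = (0.5 - lat / np.pi) * (height - 1)
            return px, py

        # e.g., the +x viewing direction lands at the horizontal center line
        print(direction_to_equirect(np.array([1.0, 0.0, 0.0]), 4096, 2048))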

  16. Using perceptual rules in interactive visualization

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Treinish, Lloyd A.

    1994-05-01

    In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, and can interactively select processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.

  17. Science Opportunity Analyzer (SOA) Version 8

    NASA Technical Reports Server (NTRS)

    Witoff, Robert J.; Polanskey, Carol A.; Aguinaldo, Anna Marie A.; Liu, Ning; Hofstadter, Mark D.

    2013-01-01

    SOA allows scientists to plan spacecraft observations. It facilitates the identification of geometrically interesting times in a spacecraft's orbit that a user can use to plan observations or instrument-driven spacecraft maneuvers. These observations can then be visualized in multiple ways in both two- and three-dimensional views. When observations have been optimized within a spacecraft's flight rules, the resulting plans can be output for use by other JPL uplink tools. Now in its eighth major version, SOA improves on these capabilities in a modern and integrated fashion. SOA consists of five major functions: Opportunity Search, Visualization, Observation Design, Constraint Checking, and Data Output. Opportunity Search is a GUI-driven interface to existing search engines that can be used to identify times when a spacecraft is in a specific geometrical relationship with other bodies in the solar system. This function can be used for advanced mission planning as well as for making last-minute adjustments to mission sequences in response to trajectory modifications. Visualization is a key aspect of SOA. The user can view observation opportunities in either a 3D representation or as a 2D map projection. Observation Design allows the user to orient the spacecraft and visualize the projection of the instrument field of view for that orientation using the same views as Opportunity Search. Constraint Checking is provided to validate various geometrical and physical aspects of an observation design. The user has the ability to easily create custom rules or to use official project-generated flight rules. This capability may also allow scientists to easily assess the cost to science if flight rule changes occur. Data Output allows the user to compute ancillary data related to an observation or to a given position of the spacecraft along its trajectory. The data can be saved as a tab-delimited text file or viewed as a graph. SOA combines science planning functionality unique to both JPL and the sponsoring spacecraft. SOA is able to ingest JPL SPICE Kernels that are used to drive the tool and its computations. A Percy search engine is then included that identifies interesting time periods for the user to build observations. When observations are then built, flight-like orientation algorithms closely simulate the flight spacecraft's dynamics. SOA v8 represents large steps forward from SOA v7 in terms of quality, reliability, maintainability, efficiency, and user experience. A tailored agile development environment has been built around SOA that provides automated unit testing, continuous build and integration, a consolidated Web-based code and documentation storage environment, modern Java enhancements, and a focus on usability.
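
    Opportunity Search sits on top of SPICE geometry kernels. As a rough, independent illustration of that kind of computation (not SOA code; the meta-kernel name is hypothetical and the distance test is a toy criterion), using the community spiceypy wrapper:

        import numpy as np
        import spiceypy as spice

        spice.furnsh("mission_meta_kernel.tm")   # hypothetical meta-kernel

        et0 = spice.str2et("2025 JAN 01 00:00:00")
        for hours in range(0, 24 * 30, 6):       # scan one month at 6 h steps
            et = et0 + hours * 3600.0
            pos, _ = spice.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
            dist_km = np.linalg.norm(pos)
            if dist_km < 1.0e8:                  # a toy "interesting geometry" test
                print(spice.et2utc(et, "C", 0), f"{dist_km:.3e} km")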

  18. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  19. Toward User Interfaces and Data Visualization Criteria for Learning Design of Digital Textbooks

    ERIC Educational Resources Information Center

    Railean, Elena

    2014-01-01

    User interface and data visualisation criteria are central issues in digital textbook design. However, when applying mathematical modelling of the learning process to the analysis of possible solutions, it can be observed that results differ. Mathematical learning views cognition on the basis of statistics and probability theory, graph…

  20. Figure 1 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics

    Cancer.gov

    A screenshot of the IGV user interface at the chromosome view. IGV user interface showing five data types (copy number, methylation, gene expression, and loss of heterozygosity; mutations are overlaid with black boxes) from approximately 80 glioblastoma multiforme samples. Adapted from Figure S1; Robinson et al. 2011

  1. Immersive Earth Science: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission with VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of geo-located data in VR and the subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  2. Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis

    DOE PAGES

    Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.; ...

    2017-07-18

    Gene expression, signal transduction, protein/chemical interactions, biomedical literature co-occurrences, and other concepts are often captured in biological network representations, where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only a few of them can scale up and follow today's indisputable information growth. In this review, we shortly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and postvisualization capabilities.

  3. Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.

    Gene expression, signal transduction, protein/chemical interactions, biomedical literature co-occurrences, and other concepts are often captured in biological network representations, where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only a few of them can scale up and follow today's indisputable information growth. In this review, we shortly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and postvisualization capabilities.

  4. Display Device Color Management and Visual Surveillance of Vehicles

    ERIC Educational Resources Information Center

    Srivastava, Satyam

    2011-01-01

    Digital imaging has seen an enormous growth in the last decade. Today users have numerous choices in creating, accessing, and viewing digital image/video content. Color management is important to ensure consistent visual experience across imaging systems. This is typically achieved using color profiles. In this thesis we identify the limitations…

  5. A computer graphics system for visualizing spacecraft in orbit

    NASA Technical Reports Server (NTRS)

    Eyles, Don E.

    1989-01-01

    To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
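
    The user-controlled point of view this system describes is what modern graphics code calls a look-at camera. A small, self-contained NumPy sketch of the construction (illustrative, not the simulator's code):

        import numpy as np

        def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
            """Build a 4x4 view matrix for a camera at `eye` looking at `target`."""
            f = target - eye
            f = f / np.linalg.norm(f)                        # forward
            s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
            u = np.cross(s, f)                               # true up
            m = np.eye(4)
            m[0, :3], m[1, :3], m[2, :3] = s, u, -f
            m[:3, 3] = -m[:3, :3] @ eye
            return m

        # e.g., view a spacecraft at the origin from 100 m away along +x
        print(look_at(np.array([100.0, 0.0, 0.0]), np.zeros(3)))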

  6. 2016 CSSE L3 Milestone: Deliver In Situ to XTD End Users

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M.; Nouanesengsy, Boonthanome; Fasel, Patricia Kroll

    This report summarizes the activities in FY16 toward satisfying the CSSE 2016 L3 milestone to deliver in situ to XTD end users of EAP codes. The milestone was accomplished, with ongoing work to ensure the capability is maintained and developed. Two XTD end users used the in situ capability in Rage. A production ParaView capability was created in the HPC and desktop environment. Two new capabilities were added to ParaView in support of an EAP in situ workflow. We also worked with various support groups at the lab to deploy a production ParaView in the LANL environment for both desktop and HPC systems. In addition, for this milestone, we moved two VTK-based filters from research objects into the production ParaView code to support a variety of standard visualization pipelines for our EAP codes.

  7. Advanced Visualization and Interactive Displays (AVID)

    DTIC Science & Technology

    2009-04-01

    decision maker. The ACESViewer architecture allows the users to pull data from databases, flat files, or user-generated data via scripting. … of the equation and is of critical concern as it scales the needs of the polygon fill operations. Numerous users are now using two 30” cinema … 6 module configuration. Based on the architecture of the lab there was only one location that would be suitable without any viewing obstructions.

  8. Gnome View: A tool for visual representation of human genome data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelkey, J.E.; Thomas, G.S.; Thurman, D.A.

    1993-02-01

    GnomeView is a tool for exploring data generated by the Human Genome Project. GnomeView provides both graphical and textual styles of data presentation; employs an intuitive window-based graphical query interface; and integrates its underlying genome databases in such a way that the user can navigate smoothly across databases and between different levels of data. This paper describes GnomeView and discusses how it addresses various genome informatics issues.

  9. DIA2: Web-based Cyberinfrastructure for Visual Analysis of Funding Portfolios.

    PubMed

    Madhavan, Krishna; Elmqvist, Niklas; Vorvoreanu, Mihaela; Chen, Xin; Wong, Yuetling; Xian, Hanjun; Dong, Zhihua; Johri, Aditya

    2014-12-01

    We present a design study of the Deep Insights Anywhere, Anytime (DIA2) platform, a web-based visual analytics system that allows program managers and academic staff at the U.S. National Science Foundation to search, view, and analyze their research funding portfolio. The goal of this system is to facilitate users' understanding of both past and currently active research awards in order to make more informed decisions about future funding. This user group is characterized by high domain expertise yet not necessarily high literacy in visualization and visual analytics (they are essentially casual experts), and thus requires careful visual and information design, including adhering to user experience standards, providing a self-instructive interface, and progressively refining visualizations to minimize complexity. We discuss the challenges of designing a system for casual experts and highlight how we addressed this issue by modeling the organizational structure and workflows of the NSF within our system. We discuss each stage of the design process, starting with formative interviews, prototypes, and finally live deployments and evaluation with stakeholders.

  10. Improving visual search in instruction manuals using pictograms.

    PubMed

    Kovačević, Dorotea; Brozović, Maja; Možina, Klementina

    2016-11-01

    Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for the target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking, rather than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.

  11. Figure 4 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics

    Cancer.gov

    Gene-list view of genomic data. The gene-list view allows users to compare data across a set of loci. The data in this figure includes copy number, mutation, and clinical data from 202 glioblastoma samples from TCGA. Adapted from Figure 7; Thorvaldsdottir H et al. 2012

  12. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    PubMed

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  13. Science opportunity analyzer - a multi-mission tool for planning

    NASA Technical Reports Server (NTRS)

    Streiffert, B. A.; Polanskey, C. A.; O'Reilly, T.; Colwell, J.

    2002-01-01

    For many years the diverse scientific community that supports JPL's wide variety of interplanetary space missions has needed a tool in order to plan and develop their experiments. The tool needs to be easily adapted to various mission types and portable to the user community. The Science Opportunity Analyzer, SOA, now in its third year of development, is intended to meet this need. SOA is a Java-based application that is designed to enable scientists to identify and analyze opportunities for science observations from spacecraft. It differs from other planning tools in that it does not require an in-depth knowledge of the spacecraft command system or operation modes to begin high-level planning. Users can, however, develop increasingly detailed levels of design. SOA consists of six major functions: Opportunity Search, Visualization, Observation Design, Constraint Checking, Data Output and Communications. Opportunity Search is a GUI-driven interface to existing search engines that can be used to identify times when a spacecraft is in a specific geometrical relationship with other bodies in the solar system. This function can be used for advanced mission planning as well as for making last-minute adjustments to mission sequences in response to trajectory modifications. Visualization is a key aspect of SOA. The user can view observation opportunities in either a 3D representation or as a 2D map projection. The user is given extensive flexibility to customize what is displayed in the view. Observation Design allows the user to orient the spacecraft and visualize the projection of the instrument field of view for that orientation using the same views as Opportunity Search. Constraint Checking is provided to validate various geometrical and physical aspects of an observation design. The user has the ability to easily create custom rules or to use official project-generated flight rules. This capability may also allow scientists to easily assess the cost to science if flight rule changes occur. Data Output generates information based on the spacecraft's trajectory, opportunity search results, or a created observation. The data can be viewed either in tabular format or as a graph. Finally, SOA is unique in that it is designed to be able to communicate with a variety of existing planning and sequencing tools. From the very beginning SOA was designed with the user in mind. Extensive surveys of the potential user community were conducted in order to develop the software requirements. Throughout the development period, close ties have been maintained with the science community to ensure that the tool maintains its user focus. Although development is still in its early stages, SOA is already developing a user community on the Cassini project, which depends on this tool for its science planning. There are other tools at JPL that do various pieces of what SOA can do; however, there is no other tool which combines all these functions and presents them to the user in such a convenient, cohesive, and easy-to-use fashion.

  14. Multi-focused geospatial analysis using probes.

    PubMed

    Butkiewicz, Thomas; Dou, Wenwen; Wartell, Zachary; Ribarsky, William; Chang, Remco

    2008-01-01

    Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions-of-interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.

  15. User Directed Tools for Exploiting Expert Knowledge in an Immersive Segmentation and Visualization Environment

    NASA Technical Reports Server (NTRS)

    Senger, Steven O.

    1998-01-01

    Volumetric data sets have become common in medicine and many sciences through technologies such as computed x-ray tomography (CT), magnetic resonance (MR), positron emission tomography (PET), confocal microscopy and 3D ultrasound. When presented with 2D images, humans immediately and unconsciously begin a visual analysis of the scene. The viewer surveys the scene, identifying significant landmarks and building an internal mental model of the presented information. The identification of features is strongly influenced by the viewer's expectations based upon their expert knowledge of what the image should contain. While not a conscious activity, the viewer makes a series of choices about how to interpret the scene. These choices occur in parallel with viewing the scene and effectively change the way the viewer sees the image. It is this interaction of viewing and choice which is the basis of many familiar visual illusions. This is especially important in the interpretation of medical images, where it is the expert knowledge of the radiologist which interprets the image. For 3D data sets this interaction of view and choice is frustrated because choices must precede the visualization of the data set. It is not possible to visualize the data set without making some initial choices which determine how the volume of data is presented to the eye. These choices include viewpoint orientation, region identification, and color and opacity assignments. Further compounding the problem is the fact that these visualization choices are defined in terms of computer graphics as opposed to the language of the expert's knowledge. The long-term goal of this project is to develop an environment where the user can interact with volumetric data sets using tools which promote the utilization of expert knowledge by incorporating visualization and choice into a tight computational loop. The tools will support activities involving the segmentation of structures, construction of surface meshes and local filtering of the data set. To conform to this environment, tools should have several key attributes. First, they should rely only on computations over a local neighborhood of the probe position. Second, they should operate iteratively over time, converging towards a limit behavior. Third, they should adapt to user input, modifying their operational parameters with time.
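
    A toy sketch of a tool with the three attributes listed above: one iteration of seeded region growing that touches only the local neighborhood of the current segment and converges as no new voxels are admitted (NumPy/SciPy; data and tolerance are placeholders):

        import numpy as np
        from scipy import ndimage

        def grow_step(volume, segment, tol):
            """One iteration of region growing: admit voxels bordering the
            current segment whose intensity is within `tol` of the segment mean."""
            border = ndimage.binary_dilation(segment) & ~segment  # local neighborhood
            mean = volume[segment].mean()
            admit = border & (np.abs(volume - mean) <= tol)
            return segment | admit

        rng = np.random.default_rng(2)
        vol = rng.normal(0, 1, (64, 64, 64))
        vol[20:40, 20:40, 20:40] += 4.0                 # a bright embedded structure
        seg = np.zeros(vol.shape, dtype=bool)
        seg[30, 30, 30] = True                          # user-placed seed (probe)
        for _ in range(25):                             # iterate toward a stable segment
            seg = grow_step(vol, seg, tol=2.0)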

  16. User-Centered Evaluation of Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean C.

    Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging, as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on user-centered evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics. Researchers and practitioners in HCI who are interested in visual analytics will find this information useful as well, along with a discussion of changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis and analysis techniques and problems is provided, as well as an introduction to user-centered evaluation and various evaluation techniques for readers from different disciplines. The understanding of these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations to incorporate. While many researchers view work in user-centered evaluation as a less-than-exciting area, the opposite is true. First of all, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term “users” in almost all situations, there is a wide variety of users who all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding. Just understanding the complexities of a number of these domains is exciting. Researchers are trying out different visualizations and interactions as well. And of course, the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing. There are no standard processes and metrics, and thus those of us working on user-centered evaluation must be creative in our work with both the users and with the researchers and developers.

  17. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks.

    PubMed

    Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin

    2015-01-01

    With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, such a package is particularly useful for cloud users. The easy installation and operation are an attractive quality for R beginners and users with no previous experience with cloud services.

  18. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks

    PubMed Central

    2015-01-01

    Background With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. Results With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. Conclusions This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, such a package is particularly useful for cloud users. The easy installation and operation are an attractive quality for R beginners and users with no previous experience with cloud services. PMID:25708840

  19. JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays

    USGS Publications Warehouse

    Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.

    2004-01-01

    JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed memory approach to out-of-core montage visualization using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype, implemented over a local area network, where the image is distributed using LambdaRAM on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
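
    The pre-fetching idea can be illustrated with a toy cache: when the viewer requests a tile, the cache also warms the neighboring tiles the user is likely to pan into next. This is a minimal Python sketch of the general pattern, not LambdaRAM's actual design:

```python
from collections import OrderedDict

class TileCache:
    """Toy LRU tile cache with neighbor prefetching, loosely inspired by
    the aggressive pre-fetching LambdaRAM performs (not its actual API)."""

    def __init__(self, fetch, capacity=256):
        self.fetch = fetch          # function (row, col) -> tile data
        self.capacity = capacity
        self.cache = OrderedDict()  # (row, col) -> tile, in LRU order

    def get(self, row, col):
        tile = self._load(row, col)
        # Prefetch the 8 neighbors the user is likely to pan into next.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    self._load(row + dr, col + dc)
        return tile

    def _load(self, row, col):
        key = (row, col)
        if key not in self.cache:
            self.cache[key] = self.fetch(row, col)
        self.cache.move_to_end(key)           # mark most recently used
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return self.cache[key]
```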

  20. WiseView: Visualizing motion and variability of faint WISE sources

    NASA Astrophysics Data System (ADS)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
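
    The interactively tunable image stretch mentioned above is conventionally a percentile clip and rescale; here is a minimal sketch of that operation (illustrative, not WiseView's actual implementation):

```python
import numpy as np

def stretch(img, lo_pct=5.0, hi_pct=99.5):
    """Clip an image between two percentiles and rescale to [0, 1],
    a common 'stretch' for bringing out faint astronomical sources."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Blinking is then just cycling through the stretched epochs in order:
# frames = [stretch(coadd) for coadd in epoch_coadds]
```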

  1. Design and implementation of visualization methods for the CHANGES Spatial Decision Support System

    NASA Astrophysics Data System (ADS)

    Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan

    2014-05-01

    The CHANGES Spatial Decision Support System (SDSS) is a web-based system aimed at risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional aspect of the risk information required by the system, combined with the diversity of the end users, calls for sophisticated visualization methods and tools. The key goal of the present work is to exploit the large amount of data efficiently in relation to the needs of the end user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end users, definition of the system's modules and definition of the data. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on their category, end users have access to different modules of the system and thus to the appropriate visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open-source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools. The map comparison tools are of great importance within the SDSS and include the following: a swiping tool for comparison of different data at the same location; raster subtraction for comparison of the same phenomenon at different times; linked views for comparison of data from different locations; and a time slider tool for monitoring changes in spatio-temporal data. All these techniques are part of the interactive interface of the system and make use of spatial and spatio-temporal data. Further significant aspects of the visualization component include conventional cartographic techniques and visualization of non-spatial data. The main expectation from the present work is to offer efficient visualization of risk-related data in order to facilitate the decision-making process, which is the final purpose of the CHANGES SDSS. This work is part of the "CHANGES" project, funded by the European Community's 7th Framework Programme.
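
    The raster subtraction comparison amounts to differencing two co-registered grids while respecting nodata cells. The SDSS performs this in the browser; the sketch below shows the underlying idea in Python with numpy (the function and sample values are hypothetical):

```python
import numpy as np

def raster_difference(raster_t1, raster_t2, nodata=np.nan):
    """Difference two co-registered rasters of the same phenomenon at two
    times; cells that are nodata in either input stay nodata."""
    diff = raster_t2 - raster_t1
    mask = np.isnan(raster_t1) | np.isnan(raster_t2)
    diff[mask] = nodata
    return diff

# Example: change in modeled hazard intensity between two scenario years.
before = np.array([[0.2, 0.4], [np.nan, 0.9]])
after = np.array([[0.3, 0.4], [0.5, np.nan]])
print(raster_difference(before, after))
```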

  2. Dynamix: dynamic visualization by automatic selection of informative tracks from hundreds of genomic datasets.

    PubMed

    Monfort, Matthias; Furlong, Eileen E M; Girardot, Charles

    2017-07-15

    Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers, and the development of new visualization methods is needed to improve the usability and user experience of genome browsers. We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks. Dynamix is hosted at http://furlonglab.embl.de/Dynamix. Contact: charles.girardot@embl.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
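
    The automatic track selection can be pictured as a filter over per-track signal in the region currently in view. A minimal Python sketch of such a rule (illustrative; Dynamix's actual criteria may differ):

```python
import numpy as np

def informative_tracks(tracks, start, end, min_signal=1.0):
    """Return the names of tracks showing signal in [start, end).

    `tracks` maps a track name to a 1-D numpy array of per-base coverage;
    a track is displayed if its mean signal in the window clears a threshold.
    """
    visible = []
    for name, signal in tracks.items():
        if np.mean(signal[start:end]) >= min_signal:
            visible.append(name)
    return visible
```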

  3. EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Qingya; Guo, Hanqi; Che, Limei

    We present a novel visualization framework, EnsembleGraph, for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between ensemble members, and then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrates the efficiency of our method.
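
    The extraction step pairs each spatial region with a behavior vector and clusters the vectors hierarchically. A minimal Python sketch of that step, with synthetic data standing in for the simulation output:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: one row per spatial region, columns are a behavior
# time series (e.g., ozone concentration) averaged over ensemble members.
rng = np.random.default_rng(0)
region_behavior = rng.normal(size=(50, 24))   # 50 regions x 24 time steps

# Group regions whose temporal behavior is similar.
Z = linkage(region_behavior, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")  # cut tree into 5 clusters

for cluster_id in np.unique(labels):
    members = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: regions {members[:8]} ...")
```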

  4. VIEW-Station software and its graphical user interface

    NASA Astrophysics Data System (ADS)

    Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki

    1992-04-01

    VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be expressed simply, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters, called μV-Sugar, VS-Shell and VPL, are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUIs for new applications on the VIEW-Station system. A set of widgets is available for construction of task-oriented GUIs. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.
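
    V-Sugar's functional notation can be approximated in Python by composing image operations into a pipeline; the sketch below is a stand-in for the idea, not V-Sugar syntax (the stage functions are hypothetical):

```python
from functools import reduce
import numpy as np

def pipe(*stages):
    """Compose image-processing stages left to right, mimicking the
    functional notation of V-Sugar (a Python stand-in, not V-Sugar itself)."""
    return lambda image: reduce(lambda img, f: f(img), stages, image)

# Hypothetical stages; each takes and returns a 2-D image array.
def blur(img):
    return (img + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 3.0

def threshold(img):
    return (img > img.mean()).astype(np.uint8)

binarize = pipe(blur, threshold)
result = binarize(np.random.rand(64, 64))
```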

  5. OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media.

    PubMed

    Wu, Yingcai; Liu, Shixia; Yan, Kai; Liu, Mengchen; Wu, Fangzhao

    2014-12-01

    It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.

  6. Coordinating Council. Fifth Meeting: Quality

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This NASA Scientific and Technical Information Program Coordinating Council meeting had a theme of Quality. Four presentations were made with the following titles: How much quality can you pay for?, What the Center for AeroSpace Information has done to improve quality, Quality from the user standpoint, and Database quality: user views test producer perception. Visuals as well as discussion summaries are also included.

  7. The Effects of Visual Magnification and Physical Movement Scale on the Manipulation of a Tool with Indirect Vision

    ERIC Educational Resources Information Center

    Bohan, Michael; McConnell, Daniel S.; Chaparro, Alex; Thompson, Shelby G.

    2010-01-01

    Modern tools often separate the visual and physical aspects of operation, requiring users to manipulate an instrument while viewing the results indirectly on a display. This can pose usability challenges particularly in applications, such as laparoscopic surgery, that require a high degree of movement precision. Magnification used to augment the…

  8. SimGraph: A Flight Simulation Data Visualization Workstation

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Kenney, Patrick S.

    1997-01-01

    Today's modern flight simulation research produces vast amounts of time sensitive data, making a qualitative analysis of the data difficult while it remains in a numerical representation. Therefore, a method of merging related data together and presenting it to the user in a more comprehensible format is necessary. Simulation Graphics (SimGraph) is an object-oriented data visualization software package that presents simulation data in animated graphical displays for easy interpretation. Data produced from a flight simulation is presented by SimGraph in several different formats, including: 3-Dimensional Views, Cockpit Control Views, Heads-Up Displays, Strip Charts, and Status Indicators. SimGraph can accommodate the addition of new graphical displays to allow the software to be customized to each user's particular environment. A new display can be developed and added to SimGraph without having to design a new application, allowing the graphics programmer to focus on the development of the graphical display. The SimGraph framework can be reused for a wide variety of visualization tasks. Although it was created for the flight simulation facilities at NASA Langley Research Center, SimGraph can be reconfigured to almost any data visualization environment. This paper describes the capabilities and operations of SimGraph.

  9. Lunar Mapping and Modeling On-the-Go: A mobile framework for viewing and interacting with large geospatial datasets

    NASA Astrophysics Data System (ADS)

    Chang, G.; Kim, R.; Bui, B.; Sadaqathullah, S.; Law, E.; Malhotra, S.

    2012-12-01

    The Lunar Mapping and Modeling Portal (LMMP, https://www.lmmp.nasa.gov/) is a collaboration between four NASA centers, JPL, Marshall, Goddard, and Ames, along with the USGS and US Army to provide a centralized geospatial repository for storing processed lunar data collected from the Apollo missions to the latest data acquired by the Lunar Reconnaissance Orbiter (LRO). We offer various scientific and visualization tools to analyze rock and crater densities, lighting maps, thermal measurements, mineral concentrations, slope hazards, and digital elevation maps with the intention of serving not only scientists and lunar mission planners, but also the general public. The project has pioneered in leveraging new technologies and embracing new computing paradigms to create a system that is sophisticated, secure, robust, and scalable all the while being easy to use, streamlined, and modular. We have led innovations through the use of a hybrid cloud infrastructure, authentication through various sources, and utilizing an in-house GIS framework, TWMS (TiledWMS) as well as the commercial ArcGIS product from ESRI. On the client end, we also provide a Flash GUI framework as well as REST web services to interact with the portal. We have also developed a visualization framework on mobile devices, specifically Apple's iOS, which allows anyone from anywhere to interact with LMMP. At the most basic level, the framework allows users to browse LMMP's entire catalog of over 600 data imagery products ranging from global basemaps to LRO's Narrow Angle Camera (NAC) images that provide details of up to 0.5 meters/pixel. Users are able to view map metadata and can zoom in and out as well as pan around the entire lunar surface with the appropriate basemap. They can arbitrarily stack the maps and images on top of each other to show a layered view of the surface with layer transparency adjusted to suit the user's desired look. Once the user has selected a combination of layers, he can also bookmark those layers for quick access in subsequent sessions. A search tool is also provided to allow users to quickly find points of interest on the moon and to view the auxiliary data associated with that feature. More advanced features include the ability to interact with the data. Using the services provided by the portal, users will be able to log in and access the same scientific analysis tools provided on the web site including measuring between two points, generating subsets, and running other analysis tools, all by using a customized touch interface that is immediately familiar to users of these smart mobile devices. Users can also access their own storage on the portal and view or send the data to other users. Finally, there are features that will utilize functionality that can only be enabled by mobile devices. This includes the use of the gyroscopes and motion sensors to provide a haptic interface to visualize lunar data in 3D, on the device as well as potentially on a large screen. The mobile framework that we have developed for LMMP provides a glimpse of what is possible in visualizing and manipulating large geospatial data on small portable devices. While the framework is currently tuned to our portal, we hope that we can generalize the tool to use data sources from any type of GIS services.

  10. Innovative Visualization Techniques applied to a Flood Scenario

    NASA Astrophysics Data System (ADS)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that helps users collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4 hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not currently in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, the time step, and class values for the colour legend, and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. The second technique, web-based linked views, uses multiple windows which respond interactively to user selections, so that when an object is selected or changed in one window, it automatically updates in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing analysts to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization, which incorporates the ideas of storytelling, web-based linked views, and other visualization techniques for a 4 hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
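
    The snapshot mechanism boils down to serializing the current application state into a shareable hyperlink and parsing it back on load. A minimal Python sketch of that round trip (the host, parameter names, and values are hypothetical; note that restored values come back as strings):

```python
from urllib.parse import urlencode, parse_qsl

def snapshot_url(base, state):
    """Serialize the current view state into a shareable hyperlink
    (illustrative; the GAV framework's snapshot format is not public here)."""
    return base + "?" + urlencode(state)

def restore_state(url):
    query = url.split("?", 1)[1]
    return dict(parse_qsl(query))

state = {"time_step": 17, "layer": "water_column", "selected_building": "B042"}
link = snapshot_url("https://example.org/flood", state)   # hypothetical host
print(link)                # ...?time_step=17&layer=water_column&...
print(restore_state(link))
```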

  11. Methods and apparatus for graphical display and editing of flight plans

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael J. (Inventor); Adams, Jr., Mike B. (Inventor); Chase, Karl L. (Inventor); Lewis, Daniel E. (Inventor); McCrobie, Daniel E. (Inventor); Omen, Debi Van (Inventor)

    2002-01-01

    Systems and methods are provided for an integrated graphical user interface which facilitates the display and editing of aircraft flight-plan data. A user (e.g., a pilot) located within the aircraft provides input to a processor through a cursor control device and receives visual feedback via a display produced by a monitor. The display includes various graphical elements associated with the lateral position, vertical position, flight-plan and/or other indicia of the aircraft's operational state as determined from avionics data and/or various data sources. Through use of the cursor control device, the user may modify the flight-plan and/or other such indicia graphically in accordance with feedback provided by the display. In one embodiment, the display includes a lateral view, a vertical profile view, and a hot-map view configured to simplify the display and editing of the aircraft's flight-plan data.

  12. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.

  13. Rapid P300 brain-computer interface communication with a head-mounted display

    PubMed Central

    Käthner, Ivo; Kübler, Andrea; Halder, Sebastian

    2015-01-01

    Visual ERP (P300) based brain-computer interfaces (BCIs) allow for fast and reliable spelling and are intended as a muscle-independent communication channel for people with severe paralysis. However, they require the presentation of visual stimuli in the field of view of the user. A head-mounted display could allow convenient presentation of visual stimuli in situations, where mounting a conventional monitor might be difficult or not feasible (e.g., at a patient's bedside). To explore if similar accuracies can be achieved with a virtual reality (VR) headset compared to a conventional flat screen monitor, we conducted an experiment with 18 healthy participants. We also evaluated it with a person in the locked-in state (LIS) to verify that usage of the headset is possible for a severely paralyzed person. Healthy participants performed online spelling with three different display methods. In one condition a 5 × 5 letter matrix was presented on a conventional 22 inch TFT monitor. Two configurations of the VR headset were tested. In the first (glasses A), the same 5 × 5 matrix filled the field of view of the user. In the second (glasses B), single letters of the matrix filled the field of view of the user. The participant in the LIS tested the VR headset on three different occasions (glasses A condition only). For healthy participants, average online spelling accuracies were 94% (15.5 bits/min) using three flash sequences for spelling with the monitor and glasses A and 96% (16.2 bits/min) with glasses B. In one session, the participant in the LIS reached an online spelling accuracy of 100% (10 bits/min) using the glasses A condition. We also demonstrated that spelling with one flash sequence is possible with the VR headset for healthy users (mean: 32.1 bits/min, maximum reached by one user: 71.89 bits/min at 100% accuracy). We conclude that the VR headset allows for rapid P300 BCI communication in healthy users and may be a suitable display option for severely paralyzed persons. PMID:26097447
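
    Bits/min figures like these are conventionally computed with the Wolpaw information transfer rate formula; the sketch below shows that computation for a 5 × 5 matrix (25 targets). The study's exact selection rate and formula are not restated in the abstract, so the numbers here are only roughly matched to the reported values:

```python
import math

def wolpaw_itr(accuracy, n_targets, selections_per_min):
    """Wolpaw information transfer rate in bits/min, the standard
    figure of merit for P300 spellers."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits_per_selection = math.log2(n)
    else:
        bits_per_selection = (math.log2(n) + p * math.log2(p)
                              + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# 5 x 5 matrix -> 25 targets; 94% accuracy at ~3.8 selections/min gives
# ~15.4 bits/min, roughly in line with the reported 15.5 bits/min.
print(round(wolpaw_itr(0.94, 25, 3.8), 1))
```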

  14. Interactive Visualization of Large-Scale Hydrological Data using Emerging Technologies in Web Systems and Parallel Programming

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2013-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed in light of these challenges.

  15. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

    How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. In the not-too-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.

  16. User's Guide for MetView: A Meteorological Display and Assessment Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glantz, Clifford S.; Pelton, Mitchell A.; Allwine, K Jerry

    2000-09-27

    MetView Version 2.0 is an easy-to-use model for accessing, viewing, and analyzing meteorological data. MetView provides both graphical and numerical displays of data. It can accommodate data from an extensive meteorological monitoring network that includes near-surface monitoring locations, instrumented towers, sodars, and meteorologist observations. MetView is used operationally for routine, emergency response, and research applications at the U.S. Department of Energy's Hanford Site. At the Site's Emergency Operations Center, MetView aids in the access, visualization, and interpretation of real-time meteorological data. Historical data can also be accessed and displayed. Emergency response personnel at the Emergency Operations Center use MetView products in the formulation of protective action recommendations and other decisions. In the initial stage of an emergency, MetView can be operated using a very simple, five-step procedure. This first-responder procedure allows non-technical staff to rapidly generate meteorological products and disseminate key information. After first-responder information products are produced, the Emergency Operations Center's technical staff can conduct more sophisticated analyses using the model. This may include examining the vertical variation in winds, assessing recent changes in atmospheric conditions, evaluating atmospheric mixing rates, and forecasting changes in meteorological conditions. This user's guide provides easy-to-follow instructions for both first-responder and routine operation of the model. Examples, with explanations, are provided for each type of MetView output display. Information is provided on the naming convention, format, and contents of each type of meteorological data file used by the model. This user's guide serves as a ready reference for experienced MetView users and a training manual for new users.

  17. EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets.

    PubMed

    Robbins, Kay A

    2012-01-01

    Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis.

  18. Falcon: A Temporal Visual Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.

    2016-09-05

    Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.
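
    Multiple levels of detail for long time series are commonly built by min/max binning, which preserves peaks at coarse zoom levels. A minimal Python sketch of one such level (illustrative; Falcon's actual scheme is not described here):

```python
import numpy as np

def lod_minmax(series, factor):
    """Downsample a long time series into (min, max) pairs per bin so that
    peaks survive at coarse zoom levels, one common way to build the
    multiple levels of detail an overview+detail tool needs."""
    n = (len(series) // factor) * factor
    bins = series[:n].reshape(-1, factor)
    return bins.min(axis=1), bins.max(axis=1)

signal = np.sin(np.linspace(0, 200, 1_000_000)) + np.random.rand(1_000_000)
coarse_min, coarse_max = lod_minmax(signal, factor=1000)  # 1000x fewer points
```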

  19. Design in mind: eliciting service user and frontline staff perspectives on psychiatric ward design through participatory methods.

    PubMed

    Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til

    2016-01-01

    Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users.

  20. Design in mind: eliciting service user and frontline staff perspectives on psychiatric ward design through participatory methods

    PubMed Central

    Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til

    2016-01-01

    Abstract Background: Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. Method: User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Results: Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Conclusions: Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users. PMID:26886239

  1. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game

    PubMed Central

    de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549

  2. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game.

    PubMed

    Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
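
    SSVEP classification ultimately maps brain signals to the flicker frequency the user attends to. The sketch below shows a deliberately simple single-channel, FFT-based version of that decision; real BCIs, likely including this one, typically use multi-channel methods such as CCA, and all numbers here are synthetic:

```python
import numpy as np

def ssvep_target(eeg, fs, stim_freqs, harmonics=2):
    """Pick the stimulation frequency with the most spectral power in a
    single-channel EEG window (a simplified stand-in for the real pipeline)."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f))  # nearest FFT bin
            score += spectrum[idx]
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]

# Hypothetical 2-second window sampled at 256 Hz, flicker targets in Hz.
fs = 256
t = np.arange(512) / fs
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(512)
print(ssvep_target(eeg, fs, stim_freqs=[8.0, 10.0, 12.0, 15.0]))  # -> 12.0
```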

  3. Pinyon, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Logan; Hackenberg, Robert

    2017-02-13

    Pinyon is a tool that stores steps involved in creating a model derived from a collection of data. The main function of Pinyon is to store descriptions of calculations used to analyze or visualize the data in a database, and allow users to view the results of these calculations via a web interface. Additionally, users may also use the web interface to make adjustments to the calculations and rerun the entire collection of analysis steps automatically.

  4. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  5. Teaching Tectonics to Undergraduates with Web GIS

    NASA Astrophysics Data System (ADS)

    Anastasio, D. J.; Bodzin, A.; Sahagian, D. L.; Rutzmoser, S.

    2013-12-01

    Geospatial reasoning skills provide a means for manipulating, interpreting, and explaining structured information and are involved in higher-order cognitive processes that include problem solving and decision-making. Appropriately designed tools, technologies, and curriculum can support spatial learning. We present Web-based visualization and analysis tools developed with Javascript APIs to enhance tectonic curricula while promoting geospatial thinking and scientific inquiry. The Web GIS interface integrates graphics, multimedia, and animations that allow users to explore and discover geospatial patterns that are not easily recognized. Features include a swipe tool that enables users to see underneath layers, query tools useful in exploration of earthquake and volcano data sets, a subduction and elevation profile tool which facilitates visualization between map and cross-sectional views, drafting tools, a location function, and interactive image dragging functionality on the Web GIS. The Web GIS is platform independent and can be used on tablets or computers. The GIS tool set enables learners to view, manipulate, and analyze rich data sets from local to global scales, including such data as geology, population, heat flow, land cover, seismic hazards, fault zones, continental boundaries, and elevation using two- and three- dimensional visualization and analytical software. Coverages which allow users to explore plate boundaries and global heat flow processes aided learning in a Lehigh University Earth and environmental science Structural Geology and Tectonics class and are freely available on the Web.

  6. The Ocean Observatories Initiative: Data Access and Visualization via the Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Garzio, L. M.; Belabbassi, L.; Knuth, F.; Smith, M. J.; Crowley, M. F.; Vardaro, M.; Kerfoot, J.

    2016-02-01

    The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, is a broad-scale, multidisciplinary effort to transform oceanographic research by providing users with real-time access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The global array component of the OOI includes four high latitude sites: Irminger Sea off Greenland, Station Papa in the Gulf of Alaska, Argentine Basin off the coast of Argentina, and Southern Ocean near coordinates 55°S and 90°W. Each site is composed of fixed moorings, hybrid profiler moorings and mobile assets, with a total of approximately 110 instruments. Near real-time (telemetered) and recovered data from these instruments can be visualized and downloaded via the OOI Graphical User Interface. In this Interface, the user can visualize scientific parameters via six different plotting functions with options to specify time ranges and apply various QA/QC tests. Data streams from all instruments can also be downloaded in different formats (CSV, JSON, and NetCDF) for further data processing, visualization, and comparison to supplementary datasets. In addition, users can view alerts and alarms in the system, access relevant metadata and deployment information for specific instruments, and find infrastructure specifics for each array including location, sampling strategies, deployment schedules, and technical drawings. These datasets from the OOI provide an unprecedented opportunity to transform oceanographic research and education, and will be readily accessible to the general public via the OOI's Graphical User Interface.

  7. KinView: A visual comparative sequence analysis tool for integrated kinome research

    PubMed Central

    McSkimming, Daniel Ian; Dastgheib, Shima; Baffi, Timothy R.; Byrne, Dominic P.; Ferries, Samantha; Scott, Steven Thomas; Newton, Alexandra C.; Eyers, Claire E.; Kochut, Krzysztof J.; Eyers, Patrick A.

    2017-01-01

    Multiple sequence alignments (MSAs) are a fundamental analysis tool used throughout biology to investigate relationships between protein sequence, structure, function, evolutionary history, and patterns of disease-associated variants. However, their widespread application in systems biology research is currently hindered by the lack of user-friendly tools to simultaneously visualize, manipulate and query the information conceptualized in large sequence alignments, and the challenges in integrating MSAs with multiple orthogonal data such as cancer variants and post-translational modifications, which are often stored in heterogeneous data sources and formats. Here, we present the Multiple Sequence Alignment Ontology (MSAOnt), which represents a profile or consensus alignment in an ontological format. Subsets of the alignment are easily selected through the SPARQL Protocol and RDF Query Language for downstream statistical analysis or visualization. We have also created the Kinome Viewer (KinView), an interactive integrative visualization that places eukaryotic protein kinase cancer variants in the context of natural sequence variation and experimentally determined post-translational modifications, which play central roles in the regulation of cellular signaling pathways. Using KinView, we identified differential phosphorylation patterns between tyrosine and serine/threonine kinases in the activation segment, a major kinase regulatory region that is often mutated in proliferative diseases. We discuss cancer variants that disrupt phosphorylation sites in the activation segment, and show how KinView can be used as a comparative tool to identify differences and similarities in natural variation, cancer variants and post-translational modifications between kinase groups, families and subfamilies. Based on KinView comparisons, we identify and experimentally characterize a regulatory tyrosine (Y177 in PLK4) in the PLK4 C-terminal activation segment region termed the P+1 loop. To further demonstrate the application of KinView in hypothesis generation and testing, we formulate and validate a hypothesis explaining a novel predicted loss-of-function variant (D523N in PKCβ) in the regulatory spine of PKCβ, a recently identified tumor suppressor kinase. KinView provides a novel, extensible interface for performing comparative analyses between subsets of kinases and for integrating multiple types of residue-specific annotations in user-friendly formats. PMID:27731453
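
    Selecting alignment subsets through SPARQL, as MSAOnt enables, can be sketched with rdflib; the ontology terms below are hypothetical stand-ins, not MSAOnt's actual vocabulary:

```python
from rdflib import Graph

# Hypothetical MSAOnt-like snippet: the real ontology's terms may be named
# differently; this only illustrates selecting alignment columns via SPARQL.
g = Graph()
g.parse(data="""
@prefix msa: <http://example.org/msaont#> .
msa:col180 msa:columnIndex 180 ; msa:consensusResidue "Y" .
msa:col181 msa:columnIndex 181 ; msa:consensusResidue "T" .
""", format="turtle")

query = """
PREFIX msa: <http://example.org/msaont#>
SELECT ?col ?res WHERE {
  ?col msa:consensusResidue ?res .
  FILTER (?res = "Y")
}
"""
for row in g.query(query):
    print(row.col, row.res)
```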

  8. Visualizing Mobility of Public Transportation System.

    PubMed

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.

  9. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure the time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against a manual fast forward video browsing guided with movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach which we summarize in our recommendations for future video visual analytics systems.
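
    An action shot can be built by estimating a static background and pasting in the pixels that deviate from it in each frame. A minimal grayscale sketch of that idea (not necessarily the authors' exact algorithm):

```python
import numpy as np

def action_shot(frames, threshold=30):
    """Coalesce video frames into a single still: estimate the static
    background as the per-pixel median, then paste in pixels that differ
    from it in each frame. `frames` is a list of HxW uint8 grayscale arrays;
    later frames overwrite earlier ones where movements overlap."""
    stack = np.stack(frames).astype(np.int16)        # (n, H, W)
    background = np.median(stack, axis=0)
    shot = background.copy()
    for frame in stack:
        moving = np.abs(frame - background) > threshold
        shot[moving] = frame[moving]
    return shot.astype(np.uint8)
```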

  10. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    NASA Astrophysics Data System (ADS)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have progressed rapidly, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select a part of the views or all the views according to display capabilities. However, this kind of system requires high processing power at the server as well as the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In this system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, then the server sends the associated view sequences. Finally, we present a method which can reduce the visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To address the former, IVR (intermediate view reconstruction) is employed for smooth transition between two stereoscopic view sequences; for the latter, a disparity adjustment scheme is used. Finally, from the implementation of a testbed and the experiments, we show the value and possibilities of our system.
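
    The descriptor exchange can be pictured as the client posting a small XML fragment naming the views it wants. The element names below are hypothetical, since the paper's schema is not reproduced here; the fragment is built with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical element names standing in for the paper's XML schema.
req = ET.Element("MultiViewRequest")
ET.SubElement(req, "LeftViewNumber").text = "2"
ET.SubElement(req, "RightViewNumber").text = "3"
ET.SubElement(req, "MaxDisparity").text = "40"   # comfort limit, in pixels
print(ET.tostring(req, encoding="unicode"))
```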

  11. Comparing two types of engineering visualizations: task-related manipulations matter.

    PubMed

    Cölln, Martin C; Kusch, Kerstin; Helmert, Jens R; Kohler, Petra; Velichkovsky, Boris M; Pannasch, Sebastian

    2012-01-01

    This study focuses on the comparison of traditional engineering drawings with a CAD (computer aided design) visualization in terms of user performance and eye movements in an applied context. Twenty-five students of mechanical engineering completed search tasks for measures in two distinct depictions of a car engine component (engineering drawing vs. CAD model). Besides spatial dimensionality, the display types most notably differed in terms of information layout, access and interaction options. The CAD visualization yielded better performance if users directly manipulated the object, but was inferior if employed in a conventional static manner, i.e. inspecting only predefined views. An additional eye movement analysis revealed longer fixation durations and a stronger increase of task-relevant fixations over time when interacting with the CAD visualization. This suggests a more focused extraction and filtering of information. We conclude that the three-dimensional CAD visualization can be advantageous if its ability to manipulate is used. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  13. Trade Space Specification Tool (TSST) for Rapid Mission Architecture (Version 1.2)

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Schrock, Mitchell; Borden, Chester S.; Moeller, Robert C.

    2013-01-01

    The Trade Space Specification Tool (TSST) is designed to quickly capture ideas during early spacecraft and mission architecture design and to categorize them into trade space dimensions and options for later analysis. It is implemented as an Eclipse RCP application, which can be run as a standalone program. Users rapidly create concept items with single clicks on a graphical canvas, and can organize and create linkages between the ideas using drag-and-drop actions within the same graphical view. Various views, such as a trade view, rules view, and architecture view, are provided to help users visualize the trade space. The software can identify, explore, and assess aspects of the mission trade space, as well as capture and organize linkages and dependencies between trade space components. The tool supports a user-in-the-loop preliminary logical examination and filtering of trade space options to help identify which paths in the trade space are feasible (and preferred) and what analyses need to be done later with executable models. Multiple user views of the trade space guide the analyst or team in interpreting and communicating the trade space components and linkages, identifying gaps in combining and selecting trade space options, and deciding which combinations of architectural options should be pursued for further evaluation. This software provides an environment to rapidly capture mission trade space elements and assist users in their architecture analysis. It is focused primarily on mission and spacecraft architecture design rather than general-purpose design, and it offers flexibility in creating concepts and organizing ideas. The software is developed as an Eclipse plug-in and can potentially be integrated with other Eclipse-based tools.
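    A minimal sketch of the kind of trade-space model such a tool captures: named dimensions with discrete options, plus a rule that filters infeasible combinations. All names below are illustrative, not TSST's API:

    ```python
    # Sketch of a trade space: dimensions with discrete options, enumerated
    # exhaustively and filtered by a feasibility rule (illustrative names).
    from itertools import product

    dimensions = {
        "launch_vehicle": ["Falcon 9", "Atlas V"],
        "propulsion": ["chemical", "solar-electric"],
        "orbit": ["LEO", "GEO"],
    }

    def feasible(option_set: dict) -> bool:
        # Example rule: rule out solar-electric propulsion for LEO missions.
        return not (option_set["propulsion"] == "solar-electric"
                    and option_set["orbit"] == "LEO")

    combinations = [dict(zip(dimensions, combo))
                    for combo in product(*dimensions.values())]
    for c in filter(feasible, combinations):
        print(c)
    ```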

  14. JAva GUi for Applied Research (JAGUAR) v 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JAGUAR is a Java software tool for automatically rendering a graphical user interface (GUI) from a structured input specification. It is designed as a plug-in to the Eclipse workbench to enable users to create, edit, and externally execute analysis application input decks and then view the results. JAGUAR serves as a GUI for Sandia's DAKOTA software toolkit for optimization and uncertainty quantification. It will include problem (input deck) set-up, option specification, analysis execution, and results visualization. Through the use of wizards, templates, and views, JAGUAR helps users navigate the complexity of DAKOTA's complete input specification. JAGUAR is implemented in Java, leveraging Eclipse extension points and the Eclipse user interface. JAGUAR parses a DAKOTA NIDR input specification and presents the user with linked graphical and plain-text representations of problem set-up and option specification for DAKOTA studies. After the data has been input by the user, JAGUAR generates one or more input files for DAKOTA, executes DAKOTA, and captures and interprets the results.

  15. xiSPEC: web-based visualization, analysis and sharing of proteomics data.

    PubMed

    Kolbowski, Lars; Combe, Colin; Rappsilber, Juri

    2018-05-08

    We present xiSPEC, a standard-compliant, next-generation web-based spectrum viewer for visualizing, analyzing and sharing mass spectrometry data. Peptide-spectrum matches from standard proteomics and cross-linking experiments are supported. xiSPEC is to date the only browser-based tool supporting the standardized file formats mzML and mzIdentML defined by the proteomics standards initiative. Users can either upload data directly or select files from the PRIDE data repository as input. xiSPEC allows users to save and share their datasets publicly or password-protected, providing access to collaborators or to readers and reviewers of manuscripts. The identification table features advanced interaction controls, and spectra are presented in three interconnected views: (i) annotated mass spectrum, (ii) peptide sequence fragmentation key and (iii) quality control error plots of matched fragments. Highlighting or selecting data points in any view is reflected in all other views. Views are interactive scalable vector graphic elements, which can be exported, e.g., for use in publications. xiSPEC allows re-annotation of spectra for easy hypothesis testing by modifying input data. xiSPEC is freely accessible at http://spectrumviewer.org and the source code is openly available on https://github.com/Rappsilber-Laboratory/xiSPEC.

  16. SeeGH--a software tool for visualization of whole genome array comparative genomic hybridization data.

    PubMed

    Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L

    2004-02-09

    Array comparative genomic hybridization (CGH) is a technique that detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome have made high-resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments into displays of high-resolution chromosome profiles. Data are imported from a simple tab-delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data are displayed, users have the option of hiding or flagging DNA segments based on user-defined criteria, and of retrieving annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH is a novel software tool for viewing and analyzing array CGH data. The software gives users the ability to view the data in an overall genomic view as well as to magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
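    The preprocessing step described, averaging replicate spots, can be sketched in a few lines of Python; the input file and its column names are assumed for illustration:

    ```python
    # Sketch of SeeGH's core preprocessing step as described in the abstract:
    # read tab-delimited spot data, then compute the mean signal ratio and
    # standard deviation for each replicated clone (column names assumed).
    import csv
    from collections import defaultdict
    from statistics import mean, stdev

    ratios = defaultdict(list)
    with open("array_cgh.txt", newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            ratios[row["clone"]].append(float(row["signal_ratio"]))

    for clone, values in ratios.items():
        sd = stdev(values) if len(values) > 1 else 0.0
        print(f"{clone}\t{mean(values):.3f}\t{sd:.3f}")
    ```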

  17. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. At this point we are concentrating specifically on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.

  18. Water tunnel flow visualization using a laser

    NASA Technical Reports Server (NTRS)

    Beckner, C.; Curry, R. E.

    1985-01-01

    Laser systems for flow visualization in water tunnels (similar to the vapor screen technique used in wind tunnels) can provide two-dimensional cross-sectional views of complex flow fields. This parametric study documents the practical application of the laser-enhanced visualization (LEV) technique to water tunnel testing. Aspects of the study include laser power levels, flow seeding (using fluorescent dyes and embedded particulates), model preparation, and photographic techniques. The results of this study are discussed to provide potential users with basic information to aid in the design and setup of an LEV system.

  19. How Formal Dynamic Verification Tools Facilitate Novel Concurrency Visualizations

    NASA Astrophysics Data System (ADS)

    Aananthakrishnan, Sriram; Delisi, Michael; Vakkalanka, Sarvani; Vo, Anh; Gopalakrishnan, Ganesh; Kirby, Robert M.; Thakur, Rajeev

    With the exploding scale of concurrency, presenting valuable pieces of information collected by formal verification tools intuitively and graphically can greatly enhance concurrent system debugging. Traditional MPI program debuggers present trace views of MPI program executions. Such views are redundant, often containing equivalent traces that permute independent MPI calls. In our ISP formal dynamic verifier for MPI programs, we present a collection of alternate views made possible by the use of formal dynamic verification. Some of ISP's views help pinpoint errors, some facilitate discerning errors by eliminating redundancy, while others help users understand the program better by displaying the concurrent event orderings that must be respected by all MPI implementations, in the form of completes-before graphs. In this paper, we describe ISP's graphical user interface (GUI) capabilities in all these areas, which are currently supported by a portable Java-based GUI, a Microsoft Visual Studio GUI, and an Eclipse-based GUI whose development is in progress.
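    The completes-before view lends itself to a small illustration. The Python sketch below, with invented MPI events and edges (not ISP's actual trace format), builds such a graph and extracts one legal interleaving:

    ```python
    # Sketch of a completes-before graph like the ones ISP displays: MPI
    # calls are nodes, and an edge records an ordering every conforming MPI
    # implementation must respect. Events and edges are invented examples.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Maps each event to the set of events that must complete before it.
    completes_before = {
        "P0:Isend(to=1)":   set(),
        "P1:Irecv(from=*)": set(),
        "P0:Wait":          {"P0:Isend(to=1)"},
        "P1:Wait":          {"P1:Irecv(from=*)"},
        "P1:Recv(from=0)":  {"P1:Wait"},
    }

    # Any topological order is one legal interleaving of the MPI calls.
    print(list(TopologicalSorter(completes_before).static_order()))
    ```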

  20. MaROS: Web Visualization of Mars Orbiting and Landed Assets

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2011-01-01

    Mars Relay operations currently involve several e-mails and phone calls between lander and orbiter teams in order to settle on an agreed time for performing a communication pass between the landed asset (i.e. rover or lander) and orbiter, then back to Earth. This new application aims to reduce this complexity by presenting a visualization of the overpass time ranges and elevation angle, as well as other information. The user is able to select a specific overflight opportunity to receive further information about that particular pass. This software presents a unified view of the potential communication passes available between orbiting and landed assets on Mars. Each asset is presented to the user in a graphical view showing overpass opportunities, elevation angle, requested and acknowledged communication windows, forward and back latencies, warnings, conflicts, relative planetary times, ACE Schedules, and DSN information. This software is unique in that it is the first of its kind to visually display the information regarding communication opportunities between landed and orbiting Mars assets. The software is written using ActionScript/FLEX, a Web language, meaning that this information may be accessed over the Internet from anywhere in the world.

  1. Door and window image-based measurement using a mobile device

    NASA Astrophysics Data System (ADS)

    Ma, Guangyao; Janakaraj, Manishankar; Agam, Gady

    2015-03-01

    We present a system for door and window image-based measurement using an Android mobile device. In this system a user takes an image of a door or window that needs to be measured and, through interaction, measures specific dimensions of the object. The existing object is removed from the image and a 3D model of a replacement is rendered onto the image. The visualization provides a 3D model with which the user can interact. When tested on a mobile Android platform with an 8 MP camera, we obtained an average measurement error of roughly 0.5%. This error rate is stable across a range of view angles, distances from the object, and image resolutions. The main advantages of our mobile device application for image measurement include measuring objects for which physical access is not readily available, documenting in a precise manner the locations in the scene where the measurements were taken, and visualizing a new object with custom selections inside the original view.

  2. A WebGL Tool for Visualizing the Topology of the Sun's Coronal Magnetic Field

    NASA Astrophysics Data System (ADS)

    Duffy, A.; Cheung, C.; DeRosa, M. L.

    2012-12-01

    We present a web-based, topology-viewing tool that allows users to visualize the geometry and topology of the Sun's 3D coronal magnetic field in an interactive manner. The tool is implemented using open-source, mature, modern web technologies including WebGL, jQuery, HTML 5, and CSS 3, which are compatible with nearly all modern web browsers. As opposed to the traditional method of visualization, which involves downloading and setting up various software packages (proprietary and otherwise), the tool presents a clean interface that allows the user to easily load and manipulate the model, while also offering great power to choose which topological features are displayed. The tool accepts data encoded in the JSON open format, which has libraries available for nearly every major programming language, making it simple to generate the data.
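    Because the tool accepts JSON input, producing data for it could be as simple as the Python sketch below. The schema shown (field-line polylines tagged by topological class) is assumed for illustration and is not the tool's documented format:

    ```python
    # Emit a plausible (assumed) JSON encoding of coronal field lines as
    # polylines of Cartesian points tagged by topological class.
    import json

    field_lines = [
        {"class": "closed", "points": [[1.00, 0.0, 0.0],
                                       [1.05, 0.1, 0.2],
                                       [1.00, 0.2, 0.0]]},
        {"class": "open",   "points": [[1.00, -0.3, 0.1],
                                       [1.80, -0.6, 0.4],
                                       [2.50, -0.9, 0.8]]},
    ]

    with open("corona_topology.json", "w") as fh:
        json.dump({"units": "solar_radii", "field_lines": field_lines},
                  fh, indent=2)
    ```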

  3. Analysis of brain activity and response during monoscopic and stereoscopic visualization

    NASA Astrophysics Data System (ADS)

    Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele

    2012-03-01

    Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is if and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. These research aims represent a challenge, due to the large number of sensory, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed preliminary experiments collecting electroencephalographic (EEG) data from a group of users wearing a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.

  4. VISIBIOweb: visualization and layout services for BioPAX pathway models

    PubMed Central

    Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur

    2010-01-01

    With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470

  5. The application of NASCAD as a NASTRAN pre- and post-processor

    NASA Technical Reports Server (NTRS)

    Peltzman, Alan N.

    1987-01-01

    The NASA Computer Aided Design (NASCAD) graphics package provides an effective way to interactively create, view, and refine analytic data models. NASCAD's macro language, combined with its powerful 3-D geometric data base, gives the user important flexibility and speed in constructing models. This flexibility has the added benefit of enabling the user to keep pace with any new NASTRAN developments. NASCAD allows models to be conveniently viewed and plotted to best advantage in both the pre- and post-processing phases of development, providing useful visual feedback to the analysis process. NASCAD, used as a graphics complement to NASTRAN, can play a valuable role in the process of finite element modeling.

  6. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  7. Genomicus 2018: karyotype evolutionary trees and on-the-fly synteny computing

    PubMed Central

    Nguyen, Nga Thi Thuy; Vincens, Pierre

    2018-01-01

    Abstract Since 2010, the Genomicus web server is available online at http://genomicus.biologie.ens.fr/genomicus. This graphical browser provides access to comparative genomic analyses in four different phyla (Vertebrate, Plants, Fungi, and non vertebrate Metazoans). Users can analyse genomic information from extant species, as well as ancestral gene content and gene order for vertebrates and flowering plants, in an integrated evolutionary context. New analyses and visualization tools have recently been implemented in Genomicus Vertebrate. Karyotype structures from several genomes can now be compared along an evolutionary pathway (Multi-KaryotypeView), and synteny blocks can be computed and visualized between any two genomes (PhylDiagView). PMID:29087490

  8. Situated cognition in clinical visualization: the role of transparency in GammaKnife neurosurgery planning.

    PubMed

    Dinka, David; Nyce, James M; Timpka, Toomas

    2009-06-01

    The aim of this study was to investigate how the clinical use of visualization technology can be advanced by the application of a situated cognition perspective. The data were collected in the GammaKnife radiosurgery setting and analyzed using qualitative methods. Observations and in-depth interviews with neurosurgeons and physicists were performed at three clinics using the Leksell GammaKnife. The users' ability to perform cognitive tasks was found to be reduced each time visualizations incongruent with the particular user's perception of clinical reality were used. The main issue here was a lack of transparency, i.e. a black box problem where machine representations "stood between" users and the cognitive tasks they wanted to perform. For neurosurgeons, transparency meant their previous experience from traditional surgery could be applied, i.e. that they were not forced to perform additional cognitive work. From the view of the physicists, on the other hand, the concept of transparency was associated with mathematical precision and avoiding creating a cognitive distance between basic patient data and what is experienced as clinical reality. The physicists approached clinical visualization technology as though it was a laboratory apparatus--one that required continual adjustment and assessment in order to "capture" a quantitative clinical reality. Designers of visualization technology need to compare the cognitive interpretations generated by the new visualization systems to conceptions generated during "traditional" clinical work. This means that the viewpoint of different clinical user groups involved in a given clinical task would have to be taken into account as well. A way forward would be to acknowledge that visualization is a socio-cognitive function that has practice-based antecedents and consequences, and to reconsider what analytical and scientific challenges this presents us with.

  9. Analyzing structural changes in SNOMED CT's Bacterial infectious diseases using a visual semantic delta.

    PubMed

    Ochs, Christopher; Case, James T; Perl, Yehoshua

    2017-03-01

    Thousands of changes are applied to SNOMED CT's concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta ("semantic delta" for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT's Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Analyzing Structural Changes in SNOMED CT’s Bacterial Infectious Diseases Using a Visual Semantic Delta

    PubMed Central

    Ochs, Christopher; Case, James T.; Perl, Yehoshua

    2017-01-01

    Thousands of changes are applied to SNOMED CT’s concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta (“semantic delta” for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT’s Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. PMID:28215561

  11. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  12. Instant Gratification: Striking a Balance Between Rich Interactive Visualization and Ease of Use for Casual Web Surfers

    NASA Astrophysics Data System (ADS)

    Russell, R. M.; Johnson, R. M.; Gardiner, E. S.; Bergman, J. J.; Genyuk, J.; Henderson, S.

    2004-12-01

    Interactive visualizations can be powerful tools for helping students, teachers, and the general public comprehend significant features in rich datasets and complex systems. Successful use of such visualizations requires viewers to have, or to acquire, adequate expertise in use of the relevant visualization tools. In many cases, the learning curve associated with competent use of such tools is too steep for casual users, such as members of the lay public browsing science outreach web sites or K-12 students and teachers trying to integrate such tools into their learning about geosciences. "Windows to the Universe" (http://www.windows.ucar.edu) is a large (roughly 6,000 web pages), well-established (first posted online in 1995), and popular (over 5 million visitor sessions and 40 million pages viewed per year) science education web site that covers a very broad range of Earth science and space science topics. The primary audience of the site consists of K-12 students and teachers and the general public. We have developed several interactive visualizations for use on the site in conjunction with text and still image reference materials. One major emphasis in the design of these interactives has been to ensure that casual users can quickly learn how to use the interactive features without becoming frustrated and departing before they were able to appreciate the visualizations displayed. We will demonstrate several of these "user-friendly" interactive visualizations and comment on the design philosophy we have employed in developing them.

  13. A Java-based tool for creating KML files from GPS waypoints

    NASA Astrophysics Data System (ADS)

    Kinnicutt, P. G.; Rivard, C.; Rimer, S.

    2008-12-01

    Google Earth provides a free tool with powerful capabilities for visualizing geoscience images and data. Commercial software tools exist for sophisticated digitizing and spatial modeling, but for the purposes of presentation, visualization, and overlaying aerial images with data, Google Earth provides much of the functionality. Likewise, with current GPS (Global Positioning System) technologies and with Google Earth Plus, it is possible to upload GPS waypoints, tracks, and routes directly into Google Earth for visualization. However, older-technology GPS units, and even low-cost GPS units found today, may lack the necessary communications interface to a computer (e.g., no Bluetooth, no WiFi, no USB, no Serial) or may have an incompatible interface, such as a Serial port with no USB adapter available. In such cases, any waypoints, tracks, and routes saved in the GPS unit or recorded in a field notebook must be manually transferred to a computer for use in a GIS system or other program. This presentation describes a Java-based tool developed by the author which enables users to enter GPS coordinates in a user-friendly manner, then save these coordinates in a Keyhole Markup Language (KML) file for visualization in Google Earth. The tool either accepts user-interactive input or accepts input from a CSV (Comma Separated Value) file, which can be generated from any spreadsheet program, and it handles coordinates in lat/long or UTM (Universal Transverse Mercator) form. This presentation describes the system's applicability through several small case studies. This free and lightweight tool simplifies the task of manually inputting GPS data into Google Earth for people working in the field without an automated mechanism for uploading the data; for instance, the user may not have internet connectivity or may not have the proper hardware or software. Since it is a Java application and not a web-based tool, it can be installed on a field laptop and the GPS data can be entered manually without the need for internet connectivity. The tool provides a table view of the GPS data, but lacks a KML viewer for viewing the data overlain on an aerial view, as this viewer functionality is provided by Google Earth. The tool's primary contribution lies in its convenient method for entering GPS data manually when automated technologies are not available.
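    The conversion itself is straightforward. The original tool is Java; the Python sketch below shows the same idea, reading waypoints from a CSV file (assumed columns: name, lat, lon) and writing KML placemarks. Note that KML orders coordinates longitude-first:

    ```python
    # Read lat/long waypoints from CSV and write KML placemarks.
    import csv
    import xml.etree.ElementTree as ET

    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")

    with open("waypoints.csv", newline="") as fh:
        for row in csv.DictReader(fh):  # assumed columns: name, lat, lon
            pm = ET.SubElement(doc, "Placemark")
            ET.SubElement(pm, "name").text = row["name"]
            point = ET.SubElement(pm, "Point")
            # KML coordinates are longitude,latitude,altitude.
            ET.SubElement(point, "coordinates").text = (
                f'{row["lon"]},{row["lat"]},0')

    ET.ElementTree(kml).write("waypoints.kml",
                              xml_declaration=True, encoding="utf-8")
    ```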

  14. New Dimensions of GIS Data: Exploring Virtual Reality (VR) Technology for Earth Science

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2016-12-01

    NASA's Science Mission Directorate (SMD) Earth Science Division (ESD) Earth Science Technology Office (ESTO) and Navteca are exploring virtual reality (VR) technology as an approach and technique related to the next generation of Earth science technology information systems. Having demonstrated the value of VR in viewing pre-visualized science data encapsulated in a movie representation of a time series, further investigation has led to the additional capability of permitting the observer to interact with the data, make selections, and view volumetric data in an innovative way. The primary objective of this project has been to investigate the use of commercially available VR hardware, the Oculus Rift and the Samsung Gear VR, for scientific analysis through an interface to ArcGIS to enable the end user to order and view data from the NASA Discover-AQ mission. A virtual console is presented through the VR interface that allows the user to select various layers of data from the server in 2D, 3D, and full 4π steradian views. By demonstrating the utility of VR in interacting with Discover-AQ flight mission measurements, and building on previous work done at the Atmospheric Science Data Center (ASDC) at NASA Langley supporting analysis of sources of CO2 during the Discover-AQ mission, the investigation team has shown the potential for VR as a science tool beyond simple visualization.

  15. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
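    The interlacing step described here amounts to interleaving the scan lines of the two views into a single field-sequential frame. A minimal Python sketch, assuming equally sized grayscale images:

    ```python
    # Combine a left/right stereo pair into one field-sequential image in
    # which the two views occupy alternating scan lines (for shutter-glasses
    # display). Minimal sketch, assuming same-size grayscale arrays.
    import numpy as np

    def interlace(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        assert left.shape == right.shape, "stereo pair must match in size"
        out = left.copy()
        out[1::2] = right[1::2]   # odd rows carry the right-eye view
        return out

    left = np.zeros((480, 640), dtype=np.uint8)
    right = np.full((480, 640), 255, dtype=np.uint8)
    frame = interlace(left, right)
    print(frame[:4, 0])  # rows alternate: 0, 255, 0, 255
    ```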

  16. Exploring 4D Flow Data in an Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Stevens, A. H.; Butkiewicz, T.

    2017-12-01

    Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities, placing a cutting plane through a region of interest, etc. It is hypothesized that the advantages afforded by head-tracked viewing and 6DOF interaction devices will lead to faster and more efficient examination of 4D flow data. A human factors study is currently being prepared to empirically evaluate this method of visualization and interaction.
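    The dye-release interaction can be sketched as simple particle advection. The Python snippet below uses an invented analytic velocity field as a stand-in for the ROMS tidal-current data:

    ```python
    # Advect dye particles through a time-varying 2D velocity field with
    # plain Euler steps. The analytic "tidal" field is a toy stand-in for
    # the ROMS model output described in the abstract.
    import numpy as np

    def velocity(p: np.ndarray, t: float) -> np.ndarray:
        """Toy tidal flow: a rotation whose strength oscillates in time."""
        phase = np.cos(2 * np.pi * t / 12.0)   # 12-hour tidal period
        u = -p[:, 1] * phase
        v = p[:, 0] * phase
        return np.stack([u, v], axis=1)

    particles = np.random.default_rng(0).uniform(-1, 1, size=(500, 2))
    dt = 0.05                                   # hours per step
    for step in range(240):                     # one 12-hour cycle
        particles += dt * velocity(particles, step * dt)

    print("final spread:", particles.std(axis=0))
    ```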

  17. A visualization tool to support decision making in environmental and biological planning

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.

    2014-01-01

    Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.

  18. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

    ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if the rotation stops, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade VR systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: (1) adjust image viewing parameters without leaving the virtual space, (2) reach out and grab the image to quickly rotate and scale it to focus on key features, and (3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for themselves. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
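    As a rough illustration of the reconstruction step the abstract contrasts with VR viewing, the Python sketch below (file layout assumed) stacks focal-plane images into a volume and renders the kind of conventional 2D projection a flat-screen viewer would show:

    ```python
    # Stack 2D focal-plane images into a 3D volume, then form a maximum-
    # intensity projection: the flat 2D rendering that conventional
    # (non-VR) viewers display. The file pattern is assumed.
    import glob
    import numpy as np
    from PIL import Image

    slices = [np.asarray(Image.open(p))
              for p in sorted(glob.glob("stack/z*.tif"))]
    volume = np.stack(slices, axis=0)      # shape: (z, y, x)

    mip = volume.max(axis=0)               # maximum-intensity projection
    Image.fromarray(mip).save("projection.png")
    print("volume shape:", volume.shape)
    ```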

  19. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    PubMed

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of a pilot's vision in a jet aircraft in a virtual environment, demonstrating how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three (03) dynamic digital pilot models, representative of the smallest, average, and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot area, obscuration zone, and reflection zone were employed during the evaluation of visual fields. A vision analysis tool was also used for studying kinematic changes of the pilot's body joints during simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective in evaluating the position and alignment of different displays and controls in the workstation based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  20. FAST User Guide

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.

  1. Towards An Understanding of Mobile Touch Navigation in a Stereoscopic Viewing Environment for 3D Data Exploration.

    PubMed

    López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2016-05-01

    We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.

  2. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    PubMed Central

    Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps the REST application programming interface (API) and queries the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata, all within a single viewing instance. PMID:24904399
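    As an illustration of the kind of REST query such a viewer wraps, the hedged Python sketch below lists the projects on an XNAT server. The /data/projects endpoint follows XNAT's documented REST convention; the server URL, credentials, and response fields are placeholders to adapt to a real deployment:

    ```python
    # Query an XNAT server's REST API for its list of projects as JSON.
    # Server URL and credentials are placeholders.
    import requests

    resp = requests.get(
        "https://central.xnat.org/data/projects",
        params={"format": "json"},
        auth=("username", "password"),  # omit for anonymous-access servers
        timeout=30,
    )
    resp.raise_for_status()

    # XNAT wraps query results in a ResultSet/Result structure.
    for project in resp.json()["ResultSet"]["Result"]:
        print(project["ID"], "-", project.get("name", ""))
    ```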

  3. Integrating natural language processing and web GIS for interactive knowledge domain visualization

    NASA Astrophysics Data System (ADS)

    Du, Fangming

    Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.
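    A rough sketch of the workflow's first stages is shown below, using scikit-learn's LDA for topic modeling and t-SNE standing in for the thesis's self-organizing map (a substitution for brevity, not the thesis's actual pipeline):

    ```python
    # Turn a small corpus into topic vectors, then embed the documents in
    # 2D for a map-like layout. t-SNE substitutes here for the thesis's
    # self-organizing map; the corpus is illustrative.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.manifold import TSNE

    abstracts = [
        "glacier retreat observed from satellite imagery",
        "volcanic ash plume dispersion modeling",
        "satellite monitoring of volcanic eruptions",
        "ice sheet mass balance and sea level",
    ]

    counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topic = lda.fit_transform(counts)   # one topic vector per document

    xy = TSNE(n_components=2, perplexity=2,
              random_state=0).fit_transform(doc_topic)
    for text, (x, y) in zip(abstracts, xy):
        print(f"({x:7.1f}, {y:7.1f})  {text[:40]}")
    ```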

  4. Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.

    PubMed

    Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas

    2012-01-01

    The NEURON simulation environment is a commonly used tool to perform electrical simulation of neurons and neuronal networks. The NEURON User Interface, based on the now discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization, no standard mean to save the results of simulations, or to store the model geometry within the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well designed python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and uses the HDF standard format to store simulation results. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structure, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability of saving numerical results allows users to perform additional analysis on their previous simulations.

  5. Style grammars for interactive visualization of architecture.

    PubMed

    Aliaga, Daniel G; Rosen, Paul A; Bekins, Daniel R

    2007-01-01

    Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data amplification and can be coupled with fast-rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this paper, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and, using our system, can automatically complete the building "in the style of" other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, the entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of the others.

  6. A teleoperation training simulator with visual and kinesthetic force virtual reality

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul

    1992-01-01

    A force-reflecting teleoperation training simulator with a high-fidelity real-time graphics display has been developed for operator training. A novel feature of this simulator is that it enables the operator to feel contact forces and torques through a force-reflecting controller during the execution of the simulated peg-in-hole task, providing the operator with the feel of visual and kinesthetic force virtual reality. A peg-in-hole task is used in our simulated teleoperation trainer as a generic teleoperation task. A quasi-static analysis of a two-dimensional peg-in-hole task model has been extended to a three-dimensional model analysis to compute contact forces and torques for a virtual realization of kinesthetic force feedback. The simulator allows the user to specify force reflection gains and stiffness (compliance) values of the manipulator hand for both the three translational and the three rotational axes in Cartesian space. Three viewing modes are provided for graphics display: single view, two split views, and stereoscopic view.
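    The per-axis stiffness model described here can be illustrated with a short sketch. Assuming a simple linear spring law on each Cartesian axis (the paper's quasi-static peg-in-hole analysis is more detailed), with illustrative gain values:

    ```python
    # Per-axis Cartesian stiffness: restoring force and torque proportional
    # to translational and rotational displacement, with user-set stiffness
    # on all six axes. Gain values are illustrative, not from the paper.
    import numpy as np

    K_trans = np.diag([800.0, 800.0, 1200.0])   # N/m for x, y, z
    K_rot = np.diag([6.0, 6.0, 4.0])            # N*m/rad for roll, pitch, yaw

    def contact_wrench(dx: np.ndarray, dtheta: np.ndarray):
        """Restoring force/torque for small penetrations dx (m), dtheta (rad)."""
        return -K_trans @ dx, -K_rot @ dtheta

    force, torque = contact_wrench(np.array([0.002, 0.0, -0.001]),
                                   np.array([0.0, 0.01, 0.0]))
    print("force (N):", force, "torque (N*m):", torque)
    ```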

  7. CytoSEED: a Cytoscape plugin for viewing, manipulating and analyzing metabolic models created by the Model SEED

    PubMed Central

    DeJongh, Matthew; Bockstege, Benjamin; Frybarger, Paul; Hazekamp, Nicholas; Kammeraad, Joshua; McGeehan, Travis

    2012-01-01

    Summary: CytoSEED is a Cytoscape plugin for viewing, manipulating and analyzing metabolic models created using the Model SEED. The CytoSEED plugin enables users of the Model SEED to create informative visualizations of the reaction networks generated for their organisms of interest. These visualizations are useful for understanding organism-specific biochemistry and for highlighting the results of flux variability analysis experiments. Availability and Implementation: Freely available for download on the web at http://sourceforge.net/projects/cytoseed/. Implemented in Java SE 6 and supported on all platforms that support Cytoscape. Contact: dejongh@hope.edu Supplementary information: Installation instructions, a tutorial, and full-size figures are available at http://www.cs.hope.edu/cytoseed/. PMID:22210867

  8. Genomicus 2018: karyotype evolutionary trees and on-the-fly synteny computing.

    PubMed

    Nguyen, Nga Thi Thuy; Vincens, Pierre; Roest Crollius, Hugues; Louis, Alexandra

    2018-01-04

    Since 2010, the Genomicus web server is available online at http://genomicus.biologie.ens.fr/genomicus. This graphical browser provides access to comparative genomic analyses in four different phyla (Vertebrate, Plants, Fungi, and non vertebrate Metazoans). Users can analyse genomic information from extant species, as well as ancestral gene content and gene order for vertebrates and flowering plants, in an integrated evolutionary context. New analyses and visualization tools have recently been implemented in Genomicus Vertebrate. Karyotype structures from several genomes can now be compared along an evolutionary pathway (Multi-KaryotypeView), and synteny blocks can be computed and visualized between any two genomes (PhylDiagView). © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Visualization of High-Resolution LiDAR Topography in Google Earth

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Arrowsmith, R.; Blair, J. L.

    2009-12-01

    The growing availability of high-resolution LiDAR (Light Detection And Ranging) topographic data has proven to be revolutionary for Earth science research. These data allow scientists to study the processes acting on the Earth's surface at resolutions not previously possible yet essential for their appropriate representation. In addition to their utility for research, the data have also been recognized as powerful tools for communicating earth science concepts for education and outreach purposes. Unfortunately, the massive volume of data produced by LiDAR mapping technology can be a barrier to their use. To facilitate access to these powerful data for research and educational purposes, we have been exploring the use of Keyhole Markup Language (KML) and Google Earth to deliver LiDAR-derived visualizations. The OpenTopography Portal (http://www.opentopography.org/) is a National Science Foundation-funded facility designed to provide access to Earth science-oriented LiDAR data. OpenTopography hosts a growing collection of LiDAR data for a variety of geologic domains, including many of the active faults in the western United States. We have found that the wide spectrum of LiDAR users have variable scientific applications, computing resources, and technical experience, and thus require a data distribution system that provides various levels of access to the data. For users seeking a synoptic view of the data, and for education and outreach purposes, delivering full-resolution images derived from LiDAR topography into the Google Earth virtual globe is powerful. The virtual globe environment provides a freely available and easily navigated viewer and enables quick integration of the LiDAR visualizations with imagery, geographic layers, and other relevant data available in KML format. Through region-dependent network-linked KML, OpenTopography currently delivers over 20 GB of LiDAR-derived imagery to users via simple, easily downloaded KMZ files hosted at the Portal. This method provides seamless access to hillshaded imagery for both bare earth and first return terrain models with various angles of illumination. Seamless access to LiDAR-derived imagery in Google Earth has proven to be the most popular product available in the OpenTopography Portal. The hillshade KMZ files have been downloaded over 3000 times by users ranging from earthquake scientists to K-12 educators who wish to introduce cutting-edge real-world data into their earth science lessons. OpenTopography also provides dynamically generated KMZ visualizations of LiDAR data products produced when users choose to use the OpenTopography point cloud access and processing system. These Google Earth-compatible products allow users to quickly visualize the custom terrain products they have generated without the burden of loading the data into a GIS environment. For users who have installed the Google Earth browser plug-in, these visualizations can be launched directly from the OpenTopography results page and viewed directly in the browser.

  10. Integrating Patient-Reported Outcomes into Spine Surgical Care through Visual Dashboards: Lessons Learned from Human-Centered Design.

    PubMed

    Hartzler, Andrea L; Chaudhuri, Shomir; Fey, Brett C; Flum, David R; Lavallee, Danielle

    2015-01-01

    The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients: physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of these data in clinical practice. We share lessons learned from engaging health care professionals to inform the design of visual dashboards, an emerging type of health information technology (HIT). We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient care and quality improvement. Engaging health care professionals as stakeholders is a critical step toward the design of user-friendly HIT that is accepted, usable, and has the potential to enhance quality of care and patient outcomes.

  11. Journey Mapping the User Experience

    ERIC Educational Resources Information Center

    Samson, Sue; Granath, Kim; Alger, Adrienne

    2017-01-01

    This journey-mapping pilot study was designed to determine whether journey mapping is an effective method to enhance the student experience of using the library by assessing our services from their point of view. Journey mapping plots a process or service to produce a visual representation of a library transaction--from the point at which the…

  12. AccessScope project: Accessible light microscope for users with upper limb mobility or visual impairments.

    PubMed

    Mansoor, Awais; Ahmed, Wamiq M; Samarapungavan, Ala; Cirillo, John; Schwarte, David; Robinson, J Paul; Duerstock, Bradley S

    2010-01-01

    A web-based application was developed to remotely view slide specimens and control all functions of a research-level light microscopy workstation, called AccessScope. Students and scientists with upper limb mobility and visual impairments are often unable to use a light microscope by themselves and must depend on others for its operation. Users with upper limb mobility impairments and low vision were recruited to assist in the design process of the AccessScope personal computer (PC) user interface. Participants with these disabilities were evaluated on their ability to use AccessScope to perform microscopy tasks. AccessScope usage was compared with inspecting prescanned slide images by grading participants' identification and understanding of histological features and knowledge of microscope operation. With AccessScope, subjects were able to independently perform common light microscopy functions through an Internet browser by employing different PC pointing devices or accessibility software according to individual abilities. Subjects answered more histology and microscope usage questions correctly after first participating in an AccessScope test session. AccessScope allowed users with upper limb or visual impairments to successfully perform light microscopy without assistance. This unprecedented capability is crucial for students and scientists with disabilities to perform laboratory coursework or microscope-based research and pursue science, technology, engineering, and mathematics fields.

  13. Discriminative Multi-View Interactive Image Re-Ranking.

    PubMed

    Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng

    2017-07-01

    Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.
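
    The final re-ranking step lends itself to a compact illustration: a learned weight vector scores candidate images in the shared latent space, and the candidates are reordered by score. The sketch below uses random stand-ins for the latent features and weight vector rather than the paper's learned quantities.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 16))  # 100 candidate images in a 16-D latent space
    w = rng.normal(size=16)              # stand-in for the learned weight vector

    scores = latent @ w                  # reassigned relevance scores
    reranked = np.argsort(-scores)       # image indices, best match first
    print(reranked[:10])
    ```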

  14. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, Unidata has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences, including the atmospheric, ocean, and, most recently, Earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive, allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. Unidata provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV, along with curricula for atmospheric science education intended for the high school to graduate student levels.
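
    The IDV's access to distributed, real-time data rests on OPeNDAP. The sketch below shows the same access pattern in Python with the netCDF4 package; the server URL and variable name are hypothetical, and netCDF4 must be built with DAP support.

    ```python
    from netCDF4 import Dataset

    # Open a remote OPeNDAP dataset; only metadata is transferred at this point.
    ds = Dataset("http://example.org/thredds/dodsC/model/output.nc")
    temperature = ds.variables["air_temperature"]
    slab = temperature[0, :, :]  # fetch a single time step over the network
    print(slab.shape)
    ds.close()
    ```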

  15. Is eye damage caused by stereoscopic displays?

    NASA Astrophysics Data System (ADS)

    Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt

    2000-05-01

    A normally developing child will achieve emmetropia in youth and maintain it; the cornea, lens, and axial length of the eye grow in a remarkably coordinated fashion. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals. It was found that the growth of the axial length of the eyeball is controlled by image-focus information from the retina. It was also shown that maladjustments of this visually guided growth-control mechanism can occur, resulting in ametropia. It has thereby been proven that short-sightedness, for example, is not caused by heredity alone but can be acquired under certain visual conditions. It is shown that these conditions are similar to the conditions of viewing stereoscopic displays, where the normal accommodation-convergence coupling is decoupled. An evaluation is given of the potential for eye damage from viewing stereoscopic displays, and different viewing methods for stereoscopic displays are evaluated in this regard. Moreover, guidance is given on how environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.
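
    The accommodation-convergence decoupling at issue can be quantified with simple geometry: the eyes focus (accommodate) at the screen distance while they converge on the simulated depth. A back-of-the-envelope sketch, with example values for interpupillary distance and viewing distances:

    ```python
    import math

    IPD = 0.063        # interpupillary distance, meters (example value)
    screen_d = 0.60    # distance to the display surface, meters
    virtual_d = 0.30   # simulated depth of the stereoscopic object, meters

    accommodation = 1.0 / screen_d  # diopters, fixed by the screen
    # Convergence angles for the virtual object versus the screen plane.
    vergence_virtual = 2 * math.degrees(math.atan(IPD / (2 * virtual_d)))
    vergence_screen = 2 * math.degrees(math.atan(IPD / (2 * screen_d)))

    print(f"accommodation: {accommodation:.2f} D")
    print(f"vergence conflict: {vergence_virtual - vergence_screen:.2f} deg")
    ```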

  16. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters, like isosurface values, for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values, producing a large database of images, and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema’s capabilities.
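
    The parameter-sweep idea behind Cinema can be sketched in a few lines: render an image for each candidate parameter value at runtime and record the results in a small searchable index. The render call below is a placeholder for the actual in-situ rendering, which this sketch does not reproduce.

    ```python
    import json

    def render_isosurface(value):
        """Placeholder for an in-situ render; returns the image filename."""
        filename = f"iso_{value:.3f}.png"
        # ... render the isosurface and write the image here ...
        return filename

    # Sweep a range of isosurface values and index the resulting imagery.
    database = [{"isovalue": v, "image": render_isosurface(v)}
                for v in (0.1, 0.2, 0.3, 0.4, 0.5)]

    with open("image_database.json", "w") as f:
        json.dump(database, f, indent=2)  # later browsed with a viewer tool
    ```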

  17. DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.

    2017-12-01

    DataFed is a distributed, web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from providers to users by enabling the creation of user-driven data processing and visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial and time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made dataflow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional- and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants and seasonal, weekly, and diurnal cycles and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe, and Asia.
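
    The standards-based access pattern DataFed builds on can be illustrated with a single OGC WMS GetMap request, which returns a rendered map layer. The endpoint and layer name below are hypothetical placeholders, not actual DataFed services.

    ```python
    import requests

    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "pm25_surface", "SRS": "EPSG:4326",
        "BBOX": "-125,25,-65,50", "WIDTH": 800, "HEIGHT": 400,
        "FORMAT": "image/png", "TIME": "2017-07-04T18:00:00Z",
    }
    response = requests.get("http://example.org/wms", params=params, timeout=30)
    with open("pm25_map.png", "wb") as f:
        f.write(response.content)  # a rendered spatial view of the dataset
    ```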

  18. Interactive Visualization of Healthcare Data Using Tableau.

    PubMed

    Ko, Inseok; Chang, Hyejung

    2017-10-01

    Big data analysis is receiving increasing attention in many industries, including healthcare. Visualization plays an important role not only in intuitively showing the results of data analysis but also in the whole process of collecting, cleaning, analyzing, and sharing data. This paper presents a procedure for the interactive visualization and analysis of healthcare data using Tableau as a business intelligence tool. Starting with installation of the Tableau Desktop Personal version 10.3, this paper describes the process of understanding and visualizing healthcare data using an example. The example data of colon cancer patients were obtained from health insurance claims in years 2012 and 2013, provided by the Health Insurance Review and Assessment Service. To explore the visualization of healthcare data using Tableau for beginners, this paper describes the creation of a simple view for the average length of stay of colon cancer patients. Since Tableau provides various visualizations and customizations, the level of analysis can be increased with small multiples, view filtering, mark cards, and Tableau charts. Tableau is software that can help users explore and understand their data by creating interactive visualizations. Its advantages are that it can be used in conjunction with almost any database and that it is easy to use, with drag-and-drop creation of interactive visualizations in the desired format.
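
    The paper's example view, the average length of stay of colon cancer patients, can also be computed programmatically as a cross-check. The column names below are hypothetical, and the claims data themselves are distributed by the Health Insurance Review and Assessment Service, not bundled here.

    ```python
    import pandas as pd

    claims = pd.read_csv("colon_cancer_claims.csv",
                         parse_dates=["admit_date", "discharge_date"])
    claims["length_of_stay"] = (claims["discharge_date"]
                                - claims["admit_date"]).dt.days

    # Average length of stay per admission year, mirroring the simple Tableau view.
    alos = claims.groupby(claims["admit_date"].dt.year)["length_of_stay"].mean()
    print(alos)
    ```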

  19. cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets

    PubMed Central

    Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan

    2017-01-01

    In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets at the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprised of several billion atoms. The cellVIEW system integrates acceleration techniques to allow for real-time graphics performance at a 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is twofold: accelerating the rendering and reducing visual clutter. The main part of our datasets is made up of macromolecules, but it also comprises nucleic acid strands, which are stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been implemented directly inside a game engine. We chose to rely on a third-party engine to reduce the software development workload and to make bleeding-edge graphics techniques more accessible to end-users. To our knowledge, cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes at the atomic level, and it is freely available to use and extend. PMID:29291131
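
    The level-of-detail idea is easy to convey schematically: the representation of each molecule degrades with distance from the camera, which both accelerates rendering and reduces clutter. The thresholds below are illustrative, not taken from the paper.

    ```python
    def select_lod(camera_distance_nm):
        """Pick a representation for a macromolecule given camera distance."""
        if camera_distance_nm < 50:
            return "all-atom spheres"      # full atomic detail up close
        elif camera_distance_nm < 500:
            return "coarse-grained beads"  # neighboring atoms merged
        else:
            return "single impostor"       # one billboard per molecule

    for d in (10, 100, 1000):
        print(d, "nm ->", select_lod(d))
    ```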

  20. Phylo-VISTA: Interactive visualization of multiple DNA sequence alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.

    The power of multi-sequence comparison for biological discovery is well established. The need for new capabilities to visualize and compare cross-species alignment data is intensified by the growing number of genomic sequence datasets being generated for an ever-increasing number of organisms. To be efficient, these visualization algorithms must consistently accommodate a wide range of evolutionary distances in a comparison framework based upon phylogenetic relationships. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing a similarity measure for multiple DNA sequences. The complexity of visual presentation is effectively organized using a framework based upon interspecies phylogenetic relationships. The phylogenetic organization supports rapid, user-guided interspecies comparison. To aid in navigation through large sequence datasets, Phylo-VISTA leverages concepts from VISTA that provide a user with the ability to select and view data at varying resolutions. The combination of multiresolution data visualization and analysis with the phylogenetic framework for interspecies comparison produces a highly flexible and powerful tool for visual data analysis of multiple sequence alignments. Availability: Phylo-VISTA is available at http://www-gsd.lbl.gov/phylovista. It requires an Internet browser with Java Plugin 1.4.2, and it is integrated into the global alignment program LAGAN at http://lagan.stanford.edu.

  1. DVV: a taxonomy for mixed reality visualization in image guided surgery.

    PubMed

    Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis

    2012-02-01

    Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.

  2. The PANTHER User Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coram, Jamie L.; Morrow, James D.; Perkins, David Nikolaus

    2015-09-01

    This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geospatial and temporal visualizations. Development of this application has made user experience an explicit driver for project- and algorithmic-level decisions that will affect how analysts one day make use of PANTHER technologies.

  3. Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community: locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  4. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    NASA Technical Reports Server (NTRS)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to use easily. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from an MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is consistent with both the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views, improves the value of the system as a whole, as data becomes information.

  5. Creating a Prototype Web Application for Spacecraft Real-Time Data Visualization on Mobile Devices

    NASA Technical Reports Server (NTRS)

    Lang, Jeremy S.; Irving, James R.

    2014-01-01

    Mobile devices (smart phones, tablets) have become commonplace among almost all sectors of the workforce, especially in the technical and scientific communities. These devices provide individuals the ability to be constantly connected to any area of interest they may have, whenever and wherever they are located. The Huntsville Operations Support Center (HOSC) is attempting to take advantage of this constant connectivity to extend the data visualization component of the Payload Operations and Integration Center (POIC) to a person's mobile device. POIC users currently have a rather unique capability to create custom user interfaces in order to view International Space Station (ISS) payload health and status telemetry. These displays are used at various console positions within the POIC. The Software Engineering team has created a Mobile Display capability that will allow authenticated users to view the same displays created for the console positions on the mobile device of their choice. Utilizing modern technologies including ASP.NET, JavaScript, and HTML5, we have created a web application that renders the user's displays in any modern desktop or mobile web browser, regardless of the operating system on the device. Additionally, the application is device-aware, which enables it to render its configuration and selection menus with themes that correspond to the particular device. The Mobile Display application uses a communication mechanism known as SignalR to push updates to the web client. This communication mechanism automatically detects the best communication protocol between the client and server and also manages disconnections and reconnections of the client to the server. One benefit of this application is that the user can monitor important telemetry even while away from their console position. If expanded to the scientific community, this application would allow a scientist to view a snapshot of the state of their particular experiment at any time or place. Because the web application renders the displays that can currently be created with the POIC ground system, the user can tailor their displays for a particular device using tools that they are already trained to use.
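
    The push-based update pattern the application relies on can be sketched with a plain WebSocket client; note this illustrates the pattern only and is not the SignalR protocol, and the server URL and message fields are hypothetical placeholders.

    ```python
    import asyncio
    import json

    import websockets

    async def watch_telemetry(url):
        async with websockets.connect(url) as ws:
            async for message in ws:  # the server pushes; the client just listens
                update = json.loads(message)
                print(update.get("measurement"), update.get("value"))

    asyncio.run(watch_telemetry("ws://example.org/telemetry"))
    ```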

  6. Facilitating Navigation Through Large Archives

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Smith, Stephanie L.; Troung, Dat; Hodgson, Terry R.

    2005-01-01

    Automated Visual Access (AVA) is a computer program that effectively makes a large collection of information visible in a manner that enables a user to quickly and efficiently locate information resources, with minimal need for conventional keyword searches and perusal of complex hierarchical directory systems. AVA includes three key components: (1) a taxonomy that comprises a collection of words and phrases, clustered according to meaning, that are used to classify information resources; (2) a statistical indexing and scoring engine; and (3) a component that generates a graphical user interface that uses the scoring data to generate a visual map of resources and topics. The top level of an AVA display is a pictorial representation of an information archive. The user enters the depicted archive by clicking on a depiction of a subject-area cluster, selecting a topic from a list, or entering a query into a text box. The resulting display enables the user to view candidate information entities at various levels of detail. Resources are grouped spatially by topic, with greatest generality at the top layer and increasing detail with depth. The user can zoom in or out of specific sites or into greater or lesser content detail.
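
    The statistical indexing and scoring engine can be sketched with off-the-shelf components: TF-IDF vectors over resource descriptions, cosine-scored against a query, with the ranked scores feeding the visual map layout. The toy corpus below is illustrative only.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    resources = [
        "lunar soil mechanics experiment report",
        "Mars rover navigation software design",
        "solar array power generation analysis",
    ]
    vectorizer = TfidfVectorizer()
    index = vectorizer.fit_transform(resources)       # the statistical index

    query = vectorizer.transform(["rover navigation"])
    scores = cosine_similarity(query, index).ravel()  # the scoring engine
    for i in scores.argsort()[::-1]:
        print(f"{scores[i]:.3f}  {resources[i]}")
    ```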

  7. The CommonGround Visual Paradigm for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livnat, Yarden; Jurrus, Elizabeth R.; Gundlapalli, Adi V.

    2013-06-14

    Biosurveillance is a critical area in the intelligence community for real-time detection of disease outbreaks. Identifying epidemics enables analysts to detect and monitor disease outbreaks that might be spread from natural causes or from possible biological warfare attacks. Containing these events and disseminating alerts requires the ability to rapidly find, classify and track harmful biological signatures. In this paper, we describe a novel visual paradigm to conduct biosurveillance using an Infectious Disease Weather Map. Our system provides a visual common ground in which users can view, explore and discover emerging concepts and correlations such as symptoms, syndromes, pathogens, and geographic locations.

  8. Stereoscopic, Force-Feedback Trainer For Telerobot Operators

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1994-01-01

    Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of a compliant robot in real time, providing the operator with a feel for the forces experienced by the manipulator, as well as a view in any of three modes: single view, two split views, or stereoscopic view. From the keyboard, the user specifies the force-reflection gain and the stiffness of the manipulator hand for three translational and three rotational axes. The system offers two simulated telerobotic tasks: insertion of a peg in a hole in three dimensions, and removal and insertion of a drawer.
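
    The real-time contact computation at the heart of such a trainer is typically a spring-damper model scaled by the user-selected force-reflection gain. A per-axis sketch with illustrative stiffness, damping, and gain values:

    ```python
    import numpy as np

    def contact_force(penetration, velocity, stiffness=2000.0, damping=20.0,
                      reflection_gain=0.5):
        """Virtual force (N) reflected to the operator for a compliant contact."""
        raw = -stiffness * penetration - damping * velocity  # spring-damper model
        return reflection_gain * raw  # scaled before reaching the hand controller

    # Example: peg pressed 2 mm into the hole wall, moving inward at 1 cm/s.
    penetration = np.array([0.002, 0.0, 0.0])  # meters, per axis
    velocity = np.array([0.01, 0.0, 0.0])      # meters/second, per axis
    print(contact_force(penetration, velocity))
    ```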

  9. An interactive app for color deficient viewers

    NASA Astrophysics Data System (ADS)

    Lau, Cheryl; Perdu, Nicolas; Rodríguez-Pardo, Carlos E.; Süsstrunk, Sabine; Sharma, Gaurav

    2015-01-01

    Color deficient individuals have trouble seeing color contrasts that could be very apparent to individuals with normal color vision. For example, for some color deficient individuals, red and green apples do not have the striking contrast they have for those with normal color vision, or the abundance of red cherries in a tree is not immediately clear due to a lack of perceived contrast. We present a smartphone app that enables color deficient users to visualize such problematic color contrasts in order to help them with daily tasks. The user interacts with the app through the touchscreen. As the user traces a path around the touchscreen, the colors in the image change continuously via a transform that enhances contrasts that are weak or imperceptible for the user under native viewing conditions. Specifically, we propose a transform that shears the data along lines parallel to the dimension corresponding to the affected cone sensitivity of the user. The amount and direction of shear are controlled by the user's finger movement over the touchscreen allowing them to visualize these contrasts. Using the GPU, this simple transformation, consisting of a linear shear and translation, is performed efficiently on each pixel and in real-time with the changing position of the user's finger. The user can use the app to aid daily tasks such as distinguishing between red and green apples or picking out ripe bananas.
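
    The finger-controlled shear can be illustrated compactly. The 3x3 RGB-space shear below is a simplified stand-in: the app operates on cone-response (LMS-like) coordinates, which this sketch does not reproduce, and the shear amount tracks the finger position on the touchscreen.

    ```python
    import numpy as np

    def shear_colors(image_rgb, amount):
        """Shear each pixel's color; `amount` in [-1, 1] follows the finger."""
        shear = np.array([[1.0,    0.0, 0.0],
                          [amount, 1.0, 0.0],   # fold red differences into green
                          [0.0,    0.0, 1.0]])
        flat = image_rgb.reshape(-1, 3) @ shear.T
        return np.clip(flat, 0.0, 1.0).reshape(image_rgb.shape)

    image = np.random.default_rng(1).random((4, 4, 3))  # stand-in image in [0, 1]
    enhanced = shear_colors(image, amount=0.6)          # finger near one screen edge
    print(enhanced.shape)
    ```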

  10. Emotion scents: a method of representing user emotions on GUI widgets

    NASA Astrophysics Data System (ADS)

    Cernea, Daniel; Weber, Christopher; Ebert, Achim; Kerren, Andreas

    2013-01-01

    The world of desktop interfaces has been dominated for years by the concept of windows and standardized user interface (UI) components. Still, while supporting the interaction and information exchange between the users and the computer system, graphical user interface (GUI) widgets are rather one-sided, neglecting to capture the subjective facets of the user experience. In this paper, we propose a set of design guidelines for visualizing user emotions on standard GUI widgets (e.g., buttons, check boxes, etc.) in order to enrich the interface with a new dimension of subjective information by adding support for emotion awareness as well as post-task analysis and decision making. We highlight the use of an EEG headset for recording the various emotional states of the user while he/she is interacting with the widgets of the interface. We propose a visualization approach, called emotion scents, that allows users to view emotional reactions corresponding to different GUI widgets without influencing the layout or changing the positioning of these widgets. Our approach does not focus on highlighting the emotional experience during the interaction with an entire system, but on representing the emotional perceptions and reactions generated by the interaction with a particular UI component. Our research is motivated by enabling emotional self-awareness and subjectivity analysis through the proposed emotion-enhanced UI components for desktop interfaces. These assumptions are further supported by an evaluation of emotion scents.

  11. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.
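
    The nonlinear interpolation between reference frames can be illustrated for the orientation component with a textbook quaternion slerp; this is a generic sketch, not GoView's actual algorithm. Quaternions are unit-length [w, x, y, z].

    ```python
    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between two unit quaternions."""
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        dot = float(np.dot(q0, q1))
        if dot < 0.0:            # take the shorter arc on the 4-sphere
            q1, dot = -q1, -dot
        if dot > 0.9995:         # nearly parallel: fall back to lerp
            q = q0 + t * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1 - t) * theta) * q0
                + np.sin(t * theta) * q1) / np.sin(theta)

    identity = np.array([1.0, 0.0, 0.0, 0.0])
    quarter_turn = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(slerp(identity, quarter_turn, 0.5))  # halfway: a 45-degree yaw
    ```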

  12. Hanford Borehole Geologic Information System (HBGIS) Updated User’s Guide for Web-based Data Access and Export

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackley, Rob D.; Last, George V.; Allwardt, Craig H.

    2008-09-24

    The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access and exportation of available data in HBGIS. Users will benefit from enhancements such as dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.

  13. Visualize Your Data with Google Fusion Tables

    NASA Astrophysics Data System (ADS)

    Brisbin, K. E.

    2011-12-01

    Google Fusion Tables is a modern data management platform that makes it easy to host, manage, collaborate on, visualize, and publish tabular data online. Fusion Tables allows users to upload their own data to the Google cloud, which they can then use to create compelling and interactive visualizations with the data. Users can view data on a Google Map, plot data in a line chart, or display data along a timeline. Users can share these visualizations with others to explore and discover interesting trends about various types of data, including scientific data such as invasive species or global trends in disease. Fusion Tables has been used by many organizations to visualize a variety of scientific data. One example is the California Redistricting Map created by the LA Times: http://goo.gl/gwZt5 The Pacific Institute and Circle of Blue have used Fusion Tables to map the quality of water around the world: http://goo.gl/T4SX8 The World Resources Institute mapped the threat level of coral reefs using Fusion Tables: http://goo.gl/cdqe8 What attendees will learn in this session: This session will cover all the steps necessary to use Fusion Tables to create a variety of interactive visualizations. Attendees will begin by learning about the various options for uploading data into Fusion Tables, including Shapefile, KML file, and CSV file import. Attendees will then learn how to use Fusion Tables to manage their data by merging it with other data and controlling the permissions of the data. Finally, the session will cover how to create a customized visualization from the data, and share that visualization with others using both Fusion Tables and the Google Maps API.

  14. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies, since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application to enable its user to search boring logs rapidly and visualize them using augmented reality (AR) techniques. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using a database of more than 100,000 borehole records.
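
    The geometry behind an AR overlay of this kind is straightforward: compute the bearing and distance from the device's GPS fix to each borehole, then compare the bearing against the device heading to place the marker in the camera view. The formulas below are standard great-circle approximations; the coordinates are examples.

    ```python
    import math

    def bearing_and_distance(lat1, lon1, lat2, lon2):
        """Initial bearing (deg) and haversine distance (m) between two points."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        a = (math.sin((phi2 - phi1) / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
        distance = 2 * 6371000.0 * math.asin(math.sqrt(a))
        return bearing, distance

    device = (37.55, 126.98)    # device GPS fix (example)
    borehole = (37.56, 126.99)  # borehole location from the database module
    b, d = bearing_and_distance(*device, *borehole)
    print(f"bearing {b:.1f} deg, distance {d:.0f} m")
    ```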

  15. The social computing room: a multi-purpose collaborative visualization environment

    NASA Astrophysics Data System (ADS)

    Borland, David; Conway, Michael; Coposky, Jason; Ginn, Warren; Idaszak, Ray

    2010-01-01

    The Social Computing Room (SCR) is a novel collaborative visualization environment for viewing and interacting with large amounts of visual data. The SCR consists of a square room with 12 projectors (3 per wall) used to display a single 360-degree desktop environment that provides a large physical real estate for arranging visual information. The SCR was designed to be cost-effective, collaborative, configurable, widely applicable, and approachable for naive users. Because the SCR displays a single desktop, a wide range of applications is easily supported, making it possible for a variety of disciplines to take advantage of the room. We provide a technical overview of the room and highlight its application to scientific visualization, arts and humanities projects, research group meetings, and virtual worlds, among other uses.

  16. Virtual Reality: You Are There

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Telepresence, or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM, the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.

  17. An Eclectic Look At Viewing Station Design

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Horii, Howard N.; Kowalski, Philip

    1988-06-01

    Imaging workstations for radiology would be used by radiologists for a number of hours each day. Such long use demands a good ergonomic design of the workstation in order to avoid user fatigue and frustration. The film-and-viewbox methods presently in use have evolved over the years since radiography became a diagnostic tool. The result of this evolution is that, despite the problems of film, the ergonomics of film reading and reporting is quite mature. This paper takes a somewhat lighthearted look at workstation design, using the ideas of well-known architects and designers to illustrate points which should be considered when implementing electronic viewing systems. Examples will also be drawn from non-radiologic environments in which the user is presented with visual information for his or her integration.

  18. Authoring Tours of Geospatial Data With KML and Google Earth

    NASA Astrophysics Data System (ADS)

    Barcay, D. P.; Weiss-Malik, M.

    2008-12-01

    As virtual globes become widely adopted by the general public, the use of geospatial data has expanded greatly. With the popularization of Google Earth and other platforms, GIS systems have become virtual reality platforms. Using these platforms, a casual user can easily explore the world, browse massive datasets, create powerful 3D visualizations, and share those visualizations with millions of people using the KML language. This technology has raised the bar for professionals and academics alike. It is now expected that studies and projects will be accompanied by compelling, high-quality visualizations. In this new landscape, a presentation of geospatial data can be the most effective form of advertisement for a project: engaging both the general public and the scientific community in a unified interactive experience. On the other hand, merely dumping a dataset into a virtual globe can be a disorienting, alienating experience for many users. To create an effective, far-reaching presentation, an author must take care to make their data approachable to a wide variety of users with varying knowledge of the subject matter, expertise in virtual globes, and attention spans. To that end, we present techniques for creating self-guided interactive tours of data represented in KML and visualized in Google Earth. Using these methods, we provide the ability to move the camera through the world while dynamically varying the content, style, and visibility of the displayed data. Such tours can automatically guide users through massive, complex datasets: engaging a broad user base, and conveying subtle concepts that aren't immediately apparent when viewing the raw data. To the casual user these techniques result in an extremely compelling experience similar to watching video. Unlike video, though, these techniques maintain the rich interactive environment provided by the virtual globe, allowing users to explore the data in detail and to add other data sources to the presentation.

  19. CytoCom: a Cytoscape app to visualize, query and analyse disease comorbidity networks.

    PubMed

    Moni, Mohammad Ali; Xu, Haoming; Liò, Pietro

    2015-03-15

    CytoCom is an interactive plugin for Cytoscape that can be used to search, explore, analyse and visualize human disease comorbidity networks. It represents disease-disease associations in terms of bipartite graphs and provides International Classification of Diseases, Ninth Revision (ICD9)-centric and disease-name-centric views of disease information. It allows users to find associations between diseases based on two measures: Relative Risk (RR) and φ-correlation values. In the disease network, the size of each node is based on the prevalence of that disease. CytoCom is capable of clustering the disease network based on ICD9 disease categories. It provides user-friendly access that facilitates exploration of human diseases, and finds additional associated diseases by double-clicking a node in the existing network. Additional comorbid diseases are then connected to the existing network. It is able to assist users in the interpretation and exploration of human diseases through a variety of built-in functions. Moreover, CytoCom permits multi-colouring of disease nodes according to standard disease classification for expedient visualization. © The Author 2014. Published by Oxford University Press.
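
    The relative-risk measure used for such comorbidity networks has a simple closed form: RR_ij = C_ij * N / (P_i * P_j), where C_ij is the number of patients with both diseases, P_i and P_j are the disease prevalences, and N is the population size. The numbers below are illustrative, not drawn from the paper.

    ```python
    def relative_risk(c_ij, p_i, p_j, n):
        """RR > 1: the two diseases co-occur more often than expected by chance."""
        return (c_ij * n) / (p_i * p_j)

    N = 100_000   # patients in the population
    P_i = 8_000   # patients with disease i
    P_j = 15_000  # patients with disease j
    C_ij = 3_000  # patients diagnosed with both

    print(relative_risk(C_ij, P_i, P_j, N))  # 2.5: a strong comorbidity link
    ```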

  20. Augmented Visual Experience of Simulated Solar Phenomena

    NASA Astrophysics Data System (ADS)

    Tucker, A. O., IV; Berardino, R. A.; Hahne, D.; Schreurs, B.; Fox, N. J.; Raouafi, N.

    2017-12-01

    The Parker Solar Probe (PSP) mission will explore the Sun's corona, studying solar wind, flares and coronal mass ejections. The effects of these phenomena can impact the technology that we use in ways that are not readily apparent, including affecting satellite communications and power grids. Determining the structure and dynamics of coronal magnetic fields, tracing the flow of energy that heats the corona, and exploring dusty plasma near the Sun to understand its influence on solar wind and energetic particle formation requires a suite of sensors on board the PSP spacecraft that are engineered to observe specific phenomena. Using models of these sensors and simulated observational data, we can visualize what the PSP spacecraft will "see" during its multiple passes around the Sun. Augmented reality (AR) technologies enable convenient user access to massive data sets. We are developing an application that allows users to experience environmental data from the point of view of the PSP spacecraft in AR using the Microsoft HoloLens. Observational data, including imagery, magnetism, temperature, and density, are visualized in 4D within the user's immediate environment. Our application provides an educational tool for comprehending the complex relationships of observational data, which aids in our understanding of the Sun.

  1. Using the Browser for Science: A Collaborative Toolkit for Astronomy

    NASA Astrophysics Data System (ADS)

    Connolly, A. J.; Smith, I.; Krughoff, K. S.; Gibson, R.

    2011-07-01

    Astronomical surveys have yielded hundreds of terabytes of catalogs and images that span many decades of the electromagnetic spectrum. Even when observatories provide user-friendly web interfaces, exploring these data resources remains a complex and daunting task. In contrast, gadgets and widgets have become popular in social networking (e.g. iGoogle, Facebook). They provide a simple way to make complex data easily accessible that can be customized based on the interest of the user. With ASCOT (an AStronomical COllaborative Toolkit) we expand on these concepts to provide a customizable and extensible gadget framework for use in science. Unlike iGoogle, where all of the gadgets are independent, the gadgets we develop communicate and share information, enabling users to visualize and interact with data through multiple, simultaneous views. With this approach, web-based applications for accessing and visualizing data can be generated easily and, by linking these tools together, integrated and powerful data analysis and discovery tools can be constructed.

  2. Field observations of display placement requirements and character size for presbyopic and prepresbyopic computer users.

    PubMed

    Bartha, Michael C; Allie, Paul; Kokot, Douglas; Roe, Cynthia Purvis

    2015-01-01

    Computer users continue to report eye and upper body discomfort even as workstation flexibility has improved. Research shows a relationship between character size, viewing distance, and reading performance. Few reports exist regarding text height viewed under normal office work conditions and eye discomfort. This paper reports self-selected computer display placement, text characteristics, and subjective comfort for older and younger computer workers under real-world conditions. Computer workers were provided with monitors and adjustable display support(s). In Study 1, older workers wearing progressive-addition lenses (PALs) were observed. In Study 2, older workers wearing multifocal lenses and younger workers were observed. Workers wearing PALs experienced less eye and body discomfort with adjustable displays, and less eye and neck discomfort for text visual angles near or greater than ergonomic recommendations. Older workers wearing multifocal correction positioned displays much lower than younger workers. In general, computer users did not adjust character size to ensure that foveal images of text fell within the recommended range. Ergonomic display placement recommendations should be different for computer users wearing multifocal correction for presbyopia. Ergonomic training should emphasize adjusting text size for user comfort.
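
    The text-size relationship these recommendations rest on is the visual angle subtended by a character: angle = 2*atan(h / 2d) for character height h at viewing distance d. The values and comfort target below are illustrative, not the study's exact numbers.

    ```python
    import math

    def visual_angle_arcmin(char_height_mm, view_distance_mm):
        """Visual angle subtended by a character, in minutes of arc."""
        angle_rad = 2 * math.atan(char_height_mm / (2 * view_distance_mm))
        return math.degrees(angle_rad) * 60

    h, d = 3.0, 650.0  # 3 mm character height viewed at 65 cm
    print(f"{visual_angle_arcmin(h, d):.1f} arcmin")
    # Compare with commonly cited ergonomic targets of roughly 20-22 arcmin.
    ```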

  3. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. Thanks to the development of powerful and accessible computer systems, however, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.
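
    A minimal Bokeh sketch in the spirit of the first example: an interactive line plot of a surface-wave dispersion curve written to an HTML file that any browser can open. The dispersion values are made up for illustration.

    ```python
    from bokeh.plotting import figure, output_file, show

    periods = [5, 10, 20, 30, 40, 60, 80, 100]              # seconds
    velocities = [2.9, 3.1, 3.4, 3.6, 3.75, 3.9, 4.0, 4.1]  # km/s, illustrative

    output_file("dispersion.html")
    p = figure(title="Surface-wave dispersion (illustrative)",
               x_axis_label="Period (s)",
               y_axis_label="Group velocity (km/s)",
               tools="pan,wheel_zoom,box_zoom,reset,hover")
    p.line(periods, velocities, line_width=2)
    p.scatter(periods, velocities, size=6)
    show(p)  # opens the interactive plot in the default web browser
    ```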

  4. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. Thanks to the development of powerful and accessible computer systems, however, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.

  5. Computer vision syndrome: A review.

    PubMed

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified, the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and the symptoms associated with the use of hand-held and stereoscopic displays.

  6. Event Display for the Visualization of CMS Events

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program, which was by design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation, simple plugins are registered into the system to perform conversion from EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.

  7. Interactive visualization and analysis of multimodal datasets for surgical applications.

    PubMed

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  8. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  9. Memory for Drug Related Visual Stimuli in Young Adult, Cocaine Dependent Polydrug Users

    PubMed Central

    Ray, Suchismita; Pandina, Robert; Bates, Marsha E.

    2015-01-01

    Background and Objectives: Implicit (unconscious) and explicit (conscious) memory associations with drugs have been examined primarily using verbal cues. However, drug seeking, drug use behaviors, and relapse in chronic cocaine and other drug users are frequently triggered by viewing substance-related visual cues in the environment. We thus examined implicit and explicit memory for drug picture cues to understand the relative extent to which conscious and unconscious memory facilitation of visual drug cues occurs during cocaine dependence. Methods: Memory for drug-related and neutral picture cues was assessed in 14 inpatient cocaine dependent polydrug users and a comparison group of 21 young adults with limited drug experience (N = 35). Participants completed picture cue exposure, free recall and recognition tasks to assess explicit memory, and a repetition priming task to assess implicit memory. Results: Drug cues, compared to neutral cues, were better explicitly recalled and implicitly primed, especially in the cocaine group. In contrast, neutral cues were better explicitly recognized, especially in the control group. Conclusion: Certain forms of explicit and implicit memory for drug cues were enhanced in cocaine users compared to controls when memory was tested a short time following cue exposure. Enhanced unconscious memory processing of drug cues in chronic cocaine users may be a behavioral manifestation of heightened drug cue salience that supports drug seeking and taking. There may be value in expanding intervention techniques to utilize cocaine users’ implicit memory system. PMID:24588421

  10. Tools and Methods for Visualization of Mesoscale Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Liu, L.; Silver, D.; Kang, D.; Curchitser, E.

    2017-12-01

    Mesoscale ocean eddies form in the Gulf Stream and transport heat and nutrients across the ocean basin. The internal structure of these three-dimensional eddies and the kinematics with which they move are critical to a full understanding of their transport capacity. A series of visualization tools have been developed to extract, characterize, and track ocean eddies from 3D modeling results, to tell the ocean eddy story visually by applying various illustrative visualization techniques, and to interactively view results stored on a server from a conventional browser. In this work, we apply a feature-based method to track instances of ocean eddies through the time steps of a high-resolution multidecadal regional ocean model and generate a series of eddy paths which reflect the life cycle of individual eddy instances. The basic method uses the Okubo-Weiss parameter to define eddy cores but could be adapted to alternative specifications of an eddy. Stored results include pixel lists for each eddy instance, tracking metadata for eddy paths, and physical and geometric properties. In the simplest view, isosurfaces are used to display eddies along an eddy path. Individual eddies can then be selected and viewed independently, or an eddy path can be viewed in the context of all eddy paths (longer than a specified duration) and the ocean basin. To tell the story of mesoscale ocean eddies, we combined illustrative visualization techniques, including visual effectiveness enhancement, focus+context, and smart visibility, with the extracted volume features to explore eddy characteristics at multiple scales, from ocean basin to individual eddy. An evaluation by domain experts indicates that combining our feature-based techniques with illustrative visualization techniques provides insight into the role eddies play in ocean circulation. A web-based GUI is under development to facilitate easy viewing of stored results. The GUI gives the user control to choose among available datasets, to specify the variables (such as temperature or salinity) to display on the isosurfaces, and to choose the scale and orientation of the view. These techniques allow an oceanographer to browse the data based on eddy paths and individual eddies rather than slices or volumes of data.
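
    As a concrete illustration of the eddy-core criterion named above, the sketch below computes the Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 on a gridded velocity field with NumPy. The grid spacing, the random test field, and the -0.2 sigma core threshold are assumptions drawn from common practice, not necessarily the authors' settings.

      # Okubo-Weiss sketch (illustrative, not the authors' code): u, v are
      # 2D horizontal velocity fields on a uniform grid with spacing dx, dy.
      import numpy as np

      def okubo_weiss(u, v, dx, dy):
          """W = s_n**2 + s_s**2 - omega**2 (strain minus vorticity)."""
          dudx = np.gradient(u, dx, axis=1)
          dudy = np.gradient(u, dy, axis=0)
          dvdx = np.gradient(v, dx, axis=1)
          dvdy = np.gradient(v, dy, axis=0)
          s_n = dudx - dvdy        # normal strain
          s_s = dvdx + dudy        # shear strain
          omega = dvdx - dudy      # relative vorticity
          return s_n**2 + s_s**2 - omega**2

      # Eddy cores are commonly taken where W is strongly negative
      # (vorticity-dominated); -0.2 sigma is a literature convention.
      u = np.random.randn(200, 200); v = np.random.randn(200, 200)
      W = okubo_weiss(u, v, dx=1.0, dy=1.0)
      cores = W < -0.2 * W.std()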

  11. GVS - GENERAL VISUALIZATION SYSTEM

    NASA Technical Reports Server (NTRS)

    Keith, S. R.

    1994-01-01

    The primary purpose of GVS (General Visualization System) is to support scientific visualization of data output by the panel method PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris computer. GVS allows the user to view PMARC geometries and wakes as wire frames or as light-shaded objects. Additionally, geometries can be color-shaded according to phenomena such as pressure coefficient or velocity. Screen objects can be interactively translated and/or rotated to permit easy viewing. Keyframe animation is also available for studying unsteady cases. Because the purpose of scientific visualization is to allow investigators to gain insight into the phenomena they are examining, GVS emphasizes analysis, not artistic quality. GVS uses existing IRIX 4.0 image processing tools to allow for conversion of SGI RGB files to other formats. GVS is a self-contained program which contains all the necessary interfaces to control interaction with PMARC data. This includes 1) the GVS Tool Box, which supports color histogram analysis, lighting control, rendering control, animation, and positioning, 2) GVS on-line help, which allows the user to access control elements and get information about each control simultaneously, and 3) a limited set of basic GVS data conversion filters, which allows for the display of data requiring simpler data formats. Specialized controls for handling PMARC data include animation and wakes, and visualization of off-body scan volumes. GVS is written in C for use on SGI Iris series computers running IRIX. It requires 28 MB of RAM for execution. Two separate hardcopy documents are available for GVS. The basic document price for ARC-13361 includes only the GVS User's Manual, which outlines major features of the program and provides a tutorial on using GVS with PMARC_12 data. Programmers interested in modifying GVS for use with data in formats other than PMARC_12 format may purchase a copy of the draft GVS 3.1 Software Maintenance Manual separately, if desired, for $26. An electronic copy of the User's Manual, in Macintosh Word format, is included on the distribution media. Purchasers of GVS are advised that changes and extensions to GVS are made at their own risk. In addition, GVS includes an on-line help system and sample input files. The standard distribution medium for GVS is a .25 inch streaming magnetic tape cartridge in IRIX tar format. GVS was developed in 1992.

  12. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily motivated by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
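
    For readers unfamiliar with SSVEP decoding, the sketch below shows one generic frequency-domain classification step, not the paper's pipeline: pick the stimulus frequency with the most spectral power in an EEG epoch. The sampling rate and candidate frequencies are hypothetical.

      # Generic SSVEP frequency detection (illustrative): choose the
      # candidate stimulus frequency with the largest narrow-band power.
      import numpy as np

      def ssvep_classify(eeg, fs, stim_freqs):
          """eeg: 1-D signal from an occipital channel; returns best frequency."""
          spectrum = np.abs(np.fft.rfft(eeg))**2
          freqs = np.fft.rfftfreq(len(eeg), d=1.0/fs)
          powers = []
          for f in stim_freqs:
              band = (freqs > f - 0.2) & (freqs < f + 0.2)  # narrow band around f
              powers.append(spectrum[band].sum())
          return stim_freqs[int(np.argmax(powers))]

      fs = 250.0                         # Hz, assumed sampling rate
      t = np.arange(0, 4, 1.0/fs)
      eeg = np.sin(2*np.pi*12*t) + 0.5*np.random.randn(t.size)  # fake 12 Hz SSVEP
      print(ssvep_classify(eeg, fs, [8.0, 10.0, 12.0, 15.0]))   # -> 12.0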

  13. ADS Bumblebee comes of age

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; McDonald, Steven; Shaulis, Taylor J.; Blanco-Cuaresma, Sergi; Shapurian, Golnaz; Hostetler, Timothy W.; Templeton, Matthew R.; Lockhart, Kelly E.

    2018-01-01

    The ADS Team has been working on a new system architecture and user interface named “ADS Bumblebee” since 2015. The new system presents many advantages over the traditional ADS interface and search engine (“ADS Classic”). A new, state-of-the-art search engine features a number of new capabilities such as full-text search, advanced citation queries, filtering of results, and scalable analytics for any search results. Its services are built on a cloud computing platform which can be easily scaled to match user demand. The Bumblebee user interface is a rich JavaScript application which leverages the features of the search engine and integrates a number of additional visualizations, such as co-author and co-citation networks, which provide a hierarchical view of research groups and research topics, respectively. Displays of paper analytics provide views of the basic article metrics (citations, reads, and age). All visualizations are interactive and provide ways to further refine search results. This new search system, which has been in beta for the past three years, has now matured to the point that it provides feature and content parity with ADS Classic, and has become the recommended way to access ADS content and services. Following a successful transition to Bumblebee, the use of ADS Classic will be discouraged starting in 2018 and phased out in 2019. The new interface is available at https://ui.adsabs.harvard.edu

  14. The Effects of Varying Electronic Cigarette Warning Label Design Features On Attention, Recall, and Product Perceptions Among Young Adults.

    PubMed

    Mays, Darren; Villanti, Andrea; Niaura, Raymond S; Lindblom, Eric N; Strasser, Andrew A

    2017-12-13

    This study was a 3 (Brand: Blu, MarkTen, Vuse) by 3 (Warning Size: 20%, 30%, or 50% of advertisement surface) by 2 (Warning Background: White, Red) experimental investigation of the effects of electronic cigarette (e-cigarette) warning label design features. Young adults aged 18-30 years (n = 544) were recruited online, completed demographic and tobacco use history measures, and were randomized to view e-cigarette advertisements with warning labels that varied by the experimental conditions. Participants completed a task assessing self-reported visual attention to advertisements with a priori regions of interest defined around warning labels. Warning message recall and perceived addictiveness of e-cigarettes were assessed post-exposure. Approximately half of participants reported attending to warning labels, and reported attention was greater for warnings on red versus white backgrounds. Recall of the warning message content was also greater among those reporting attention to the warning label. Overall, those who viewed warnings on red backgrounds reported lower perceived addictiveness than those who viewed warnings on white backgrounds, and e-cigarette users reported lower perceived addictiveness than non-users. Among e-cigarette users, viewing warnings on white backgrounds produced perceptions more similar to non-users. Greater recall was significantly correlated with greater perceived addictiveness. This study provides some of the first evidence that e-cigarette warning label design features, including size and coloring, affect self-reported attention and content recall.

  15. A Visualization Tool for Integrating Research Results at an Underground Mine

    NASA Astrophysics Data System (ADS)

    Boltz, S.; Macdonald, B. D.; Orr, T.; Johnson, W.; Benton, D. J.

    2016-12-01

    Researchers with the National Institute for Occupational Safety and Health are conducting research at a deep, underground metal mine in Idaho to develop improvements in ground control technologies that reduce the effects of dynamic loading on mine workings, thereby decreasing the risk to miners. This research is multifaceted and includes: photogrammetry, microseismic monitoring, geotechnical instrumentation, and numerical modeling. When managing research involving such a wide range of data, understanding how the data relate to each other and to the mining activity quickly becomes a daunting task. In an effort to combine this diverse research data into a single, easy-to-use system, a three-dimensional visualization tool was developed. The tool was created using the Unity3D game engine and includes the mine development entries, production stopes, important geologic structures, and user-input research data. The tool provides the user with a first-person, interactive experience where they are able to walk through the mine as well as navigate the rock mass surrounding the mine to view and interpret the imported data in the context of the mine and as a function of time. The tool was developed using data from a single mine; however, it is intended to be a generic tool that can be easily extended to other mines. For example, a similar visualization tool is being developed for an underground coal mine in Colorado. The ultimate goal is for NIOSH researchers and mine personnel to be able to use the visualization tool to identify trends that may not otherwise be apparent when viewing the data separately. This presentation highlights the features and capabilities of the mine visualization tool and explains how it may be used to more effectively interpret data and reduce the risk of ground fall hazards to underground miners.

  16. GeoMapApp, Virtual Ocean, and other Free Data Resources for the 21st Century Classroom

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; Arko, R.; Ferrini, V.; O'Hara, S.; Leung, A.; Bonckzowski, J.

    2008-12-01

    With funding from the U.S. National Science Foundation, the Marine Geoscience Data System (MGDS) (http://www.marine-geo.org/) is developing GeoMapApp (http://www.geomapapp.org) - a computer application that provides wide-ranging map-based visualization and manipulation options for interdisciplinary geosciences research and education. The novelty comes from the use of this visual tool to discover and explore data, with seamless links to further discovery using traditional text-based approaches. Users can generate custom maps and grids and import their own data sets. Built-in functionality allows users to readily explore a broad suite of interactive data sets and interfaces. Examples include multi-resolution global digital models of topography, gravity, sediment thickness, and crustal ages; rock, fluid, biology and sediment sample information; research cruise underway geophysical and multibeam data; earthquake events; submersible dive photos of hydrothermal vents; geochemical analyses; DSDP/ODP core logs; seismic reflection profiles; contouring, shading, profiling of grids; and many more. On-line audio-visual tutorials lead users step-by-step through GeoMapApp functionality (http://www.geomapapp.org/tutorials/). Virtual Ocean (http://www.virtualocean.org/) integrates GeoMapApp with a 3-D earth browser based upon NASA WorldWind, providing yet more powerful capabilities. The searchable MGDS Media Bank (http://media.marine-geo.org/) supports viewing of remarkable images and video from the NSF Ridge 2000 and MARGINS programs. For users familiar with Google Earth (tm), KML files are available for viewing several MGDS data sets (http://www.marine-geo.org/education/kmls.php). Examples of accessing and manipulating a range of geoscience data sets from various NSF-funded programs will be shown. GeoMapApp, Virtual Ocean, the MGDS Media Bank and KML files are free MGDS data resources and work on any type of computer. They are currently used by educators, researchers, school teachers and the general public.

  17. WEB-GIS Decision Support System for CO2 storage

    NASA Astrophysics Data System (ADS)

    Gaitanaru, Dragos; Leonard, Anghel; Radu Gogu, Constantin; Le Guen, Yvi; Scradeanu, Daniel; Pagnejer, Mihaela

    2013-04-01

    The environmental decision support system (DSS) paradigm evolves and changes as more knowledge and technology become available to the environmental community. Geographic Information Systems (GIS) can be used to extract, assess and disseminate some types of information which are otherwise difficult to access by traditional methods. At the same time, with the help of the Internet and accompanying tools, creating and publishing online interactive maps has become easier and rich with options. The Decision Support System (MDSS) developed for the MUSTANG (A MUltiple Space and Time scale Approach for the quaNtification of deep saline formations for CO2 storaGe) project is a user-friendly web-based application that uses GIS capabilities. The MDSS can be used by experts for CO2 injection and storage in deep saline aquifers. Its main objective is to help experts make decisions based on large, structured collections of data and information. To achieve this objective, the MDSS has a geospatial, object-oriented database structure for a wide variety of data and information. The entire application is based on several principles leading to a series of capabilities and specific characteristics: (i) Open source - the entire platform is based on open-source technologies: (1) database engine, (2) application server, (3) geospatial server, (4) user interfaces, (5) add-ons, etc. (ii) Multiple database connections - the MDSS can connect to different databases located on different server machines. (iii) Desktop user experience - the MDSS architecture and design follow the structure of desktop software. (iv) Communication - the server side and the desktop are bound together by a series of functions that allow the user to upload, use, modify and download data within the application. The architecture of the system involves one database and a modular application composed of: (1) a visualization module, (2) an analysis module, (3) a guidelines module, and (4) a risk assessment module. The database component is built using the PostgreSQL and PostGIS open-source technologies. The visualization module allows the user to view data from CO2 injection sites in different ways: (1) geospatial visualization, (2) table view, (3) 3D visualization. The analysis module allows the user to perform analyses such as injectivity, containment and capacity analysis. The risk assessment module focuses on the site risk matrix approach. The guidelines module contains methodological guidelines for CO2 injection and storage in deep saline aquifers.

  18. Web-based Collaboration and Visualization in the ANDRILL Program

    NASA Astrophysics Data System (ADS)

    Reed, J.; Rack, F. R.; Huffman, L. T.; Cattadori, M.

    2009-12-01

    ANDRILL has embraced the web as a platform for facilitating collaboration and communicating science with educators, students and researchers alike. Two recent ANDRILL education and outreach projects, Project Circle 2008 and the Climate Change Student Summit, brought together classrooms from around the world to participate in cutting-edge science. A large component of each project was the online collaboration achieved through project websites, blogs, and the GroupHub--a secure online environment where students could meet to send messages, exchange presentations and pictures, and even chat live. These technologies enabled students from different countries and time zones to connect and participate in a shared 'conversation' about climate change research. ANDRILL has also developed several interactive, web-based visualizations to make scientific drilling data more engaging and accessible to the science community and the public. Each visualization is designed around three core concepts that enable the Web 2.0 platform, namely, that they are: (1) customizable - a user can customize the visualization to display the exact data she is interested in; (2) linkable - each view in the visualization has a distinct URL that the user can share with her friends via sites like Facebook and Twitter; and (3) mashable - the user can take the visualization, mash it up with data from other sites or her own research, and embed it in her blog or website. The web offers an ideal environment for visualization and collaboration because it requires no special software and works across all computer platforms, which allows organizations and research projects to engage much larger audiences. In this presentation we will describe past challenges and successes, as well as future plans.

  19. Privacy-preserving photo sharing based on a public key infrastructure

    NASA Astrophysics Data System (ADS)

    Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj

    2015-09-01

    A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling scheme is applied to protect regional visual information in photos. Protected images remain compatible with JPEG coding and can therefore be viewed by anyone on any device. However, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, built on the iOS platform.
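
    The abstract does not spell out the scrambling algorithm, so the sketch below illustrates only the general idea of key-driven regional scrambling with a toy block permutation; it is reversible by anyone holding the integer key, but it is not the paper's secure JPEG scheme.

      # Toy key-driven regional scrambling (NOT the paper's secure JPEG
      # scheme): permute 8x8 pixel blocks inside a protected region with a
      # keyed PRNG; the same key inverts the permutation exactly.
      import numpy as np

      def scramble_region(img, y0, y1, x0, x1, key, inverse=False):
          """Permute 8x8 blocks inside img[y0:y1, x0:x1] with a keyed PRNG."""
          region = img[y0:y1, x0:x1].copy()
          h, w = region.shape[:2]
          coords = [(r, c) for r in range(0, h - h % 8, 8)
                           for c in range(0, w - w % 8, 8)]
          blocks = [region[r:r+8, c:c+8].copy() for r, c in coords]
          perm = np.random.RandomState(key).permutation(len(blocks))
          if inverse:
              perm = np.argsort(perm)   # key holders can invert exactly
          for (r, c), i in zip(coords, perm):
              region[r:r+8, c:c+8] = blocks[i]
          out = img.copy()
          out[y0:y1, x0:x1] = region
          return out

      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
      protected = scramble_region(img, 16, 48, 16, 48, key=1234)
      restored = scramble_region(protected, 16, 48, 16, 48, key=1234, inverse=True)
      assert np.array_equal(restored, img)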

  20. A Visual Basic program for analyzing oedometer test results and evaluating intergranular void ratio

    NASA Astrophysics Data System (ADS)

    Monkul, M. Murat; Önal, Okan

    2006-06-01

    A Visual Basic program (POCI) is presented for analyzing oedometer test results. Oedometer test results are of vital importance from a geotechnical point of view, since settlement requirements usually control the design of foundations. The software POCI performs the necessary calculations for the conventional oedometer test. The change of global void ratio and the stress-strain characteristics can be observed both numerically and graphically. The program enables users to calculate parameters such as the coefficient of consolidation, compression index, recompression index, and preconsolidation pressure, depending on the type and stress history of the soil. Moreover, it adopts the concept of the intergranular void ratio, which may be especially important in the compression behavior of sandy soils. POCI shows the variation of the intergranular void ratio and also enables users to calculate the granular compression index.
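
    Two of the standard formulas behind such calculations can be stated compactly. The sketch below uses the textbook definition of the compression index and a common definition of the intergranular void ratio; POCI's exact implementation may differ in detail.

      # Standard geotechnical formulas (illustrative, not POCI's code).
      import math

      def compression_index(e1, e2, sigma1, sigma2):
          """Cc = (e1 - e2) / log10(sigma2 / sigma1), from two points on the
          virgin portion of the e - log(stress) curve."""
          return (e1 - e2) / math.log10(sigma2 / sigma1)

      def intergranular_void_ratio(e, fines_fraction):
          """e_g = (e + fc) / (1 - fc): treats fines as part of the void
          space (common definition; assumes equal specific gravities)."""
          return (e + fines_fraction) / (1.0 - fines_fraction)

      print(compression_index(e1=0.95, e2=0.80, sigma1=100.0, sigma2=400.0))
      print(intergranular_void_ratio(e=0.70, fines_fraction=0.15))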

  1. AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.

    PubMed

    Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R

    2016-02-01

    The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large-scale data on mobility patterns, operators can move away from the costly, mostly survey-based transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities visually explore urban mobility and optimize public transport. This is performed within a self-contained tool, as opposed to current solutions, which rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore travel demand in both space and time, correlate it with the transit network, and evaluate the quality of service that the transit network provides to citizens at a very fine grain. Operators can visually test scenarios for transit network improvements and compare the expected impact on the travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from data mining, optimisation and user perspectives.

  2. Stereoscopic visual fatigue assessment and modeling

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Wang, Tingting; Gong, Yue

    2014-03-01

    Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) on a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the processed videos. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed randomly. The results show that PMA, VRT and PERCLOS are the most efficient indicators of subjective visual fatigue, and finally a predictive model is derived by stepwise multiple regression.
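
    A minimal version of the final modeling step might look like the least-squares fit below, regressing subjective scores on PMA, VRT and PERCLOS; the data are synthetic and the coefficients illustrative, not the study's.

      # Illustrative regression of subjective fatigue scores on the
      # indicators the authors found most sensitive (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 140                                  # e.g., 14 subjects x 10 sessions
      pma = rng.normal(8.0, 1.0, n)            # punctum maximum accommodation
      vrt = rng.normal(300.0, 30.0, n)         # visual reaction time (ms)
      perclos = rng.uniform(0.05, 0.4, n)      # eyelid-closure percentage
      ss = 1.2 - 0.2*pma + 0.004*vrt + 3.0*perclos + rng.normal(0, 0.2, n)

      X = np.column_stack([np.ones(n), pma, vrt, perclos])
      coef, *_ = np.linalg.lstsq(X, ss, rcond=None)
      print("intercept and weights:", coef)    # the fitted predictive model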

  3. Visualizing common operating picture of critical infrastructure

    NASA Astrophysics Data System (ADS)

    Rummukainen, Lauri; Oksama, Lauri; Timonen, Jussi; Vankka, Jouko

    2014-05-01

    This paper presents a solution for visualizing the common operating picture (COP) of critical infrastructure (CI). The purpose is to improve the situational awareness (SA) of the strategic-level actor and the source system operator in order to support decision making. The information is obtained through the Situational Awareness of Critical Infrastructure and Networks (SACIN) framework. The system consists of an agent-based solution for gathering, storing, and analyzing the information, together with a user interface (UI), which is presented in this paper. The UI consists of multiple views visualizing information from the CI in different ways. CI actors are categorized into 11 separate sectors, and events are used to represent meaningful incidents. Past and current states, together with geographical distribution and logical dependencies, are presented to the user. The current states are visualized as segmented circles representing event categories. The geographical distribution of assets is displayed with a well-known map tool. Logical dependencies are presented in a simple directed graph, and users also have a timeline to review past events. The objective of the UI is to provide an easily understandable overview of the CI status. Therefore, testing methods such as a walkthrough, an informal walkthrough, and the Situation Awareness Global Assessment Technique (SAGAT) were used in the evaluation of the UI. Results showed that users were able to obtain an understanding of the current state of the CI, and the usability of the UI was rated as good. In particular, the designated display for the CI overview and the timeline were found to be efficient.

  4. Case study of visualizing global user download patterns using Google Earth and NASA World Wind

    NASA Astrophysics Data System (ADS)

    Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao

    2012-01-01

    Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, a large amount of spatial data is generated every day. On the other hand, these data are not well utilized due to the lack of free and easy-to-use data-visualization tools. This becomes even worse when most of the spatial data remains in the form of plain text, such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and reveal global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information in a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
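
    The core technique, converting plain-text records into a geo-browser format, can be illustrated with a few lines of Python that emit KML placemarks; the log format and counts below are hypothetical stand-ins for the USGS data.

      # Sketch of the general technique (hypothetical log format): turn
      # plain-text records "lat, lon, count" into a KML file of placemarks
      # that Google Earth or NASA World Wind can display.
      records = [("43.0", "-91.5", "120"), ("35.7", "139.7", "85")]  # stand-in data

      placemarks = []
      for lat, lon, count in records:
          placemarks.append(
              "  <Placemark>\n"
              f"    <name>{count} downloads</name>\n"
              f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
              "  </Placemark>")

      kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
             '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
             + "\n".join(placemarks) + "\n</Document>\n</kml>")

      with open("downloads.kml", "w") as f:
          f.write(kml)   # open the file in a geo-browser to see the points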

  5. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  6. Toyz: A framework for scientific analysis of large datasets and astronomical images

    NASA Astrophysics Data System (ADS)

    Moolekamp, F.; Mamajek, E.

    2015-11-01

    As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open-source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse the files on a server, quickly view very large images (>2 GB) taken with DECam and other cameras with a large FOV, and create their own visualization tools that can be added as extensions to the default Toyz framework.

  7. Rating knowledge sharing in cross-domain collaborative filtering.

    PubMed

    Li, Bin; Zhu, Xingquan; Li, Ruijiang; Zhang, Chengqi

    2015-05-01

    Cross-domain collaborative filtering (CF) aims to share common rating knowledge across multiple related CF domains to boost CF performance. In this paper, we view CF domains as a 2-D site-time coordinate system, on which multiple related domains, such as similar recommender sites or successive time-slices, can share group-level rating patterns. We propose a unified framework for cross-domain CF over the site-time coordinate system by sharing group-level rating patterns and imposing user/item dependence across domains. A generative model, ratings over site-time (ROST), which can generate and predict ratings for multiple related CF domains, is developed as the basic model for the framework. We further introduce cross-domain user/item dependence into ROST and extend it to two real-world cross-domain CF scenarios: 1) ROST (sites), for alleviating rating sparsity in the target domain, where multiple similar sites are viewed as related CF domains and some items in the target domain depend on their correspondences in the related ones; and 2) ROST (time), for modeling user-interest drift over time, where a series of time-slices are viewed as related CF domains and a user at the current time-slice depends on herself in the previous time-slice. All these ROST models are instances of the proposed unified framework. The experimental results show that ROST (sites) can effectively alleviate the sparsity problem and improve rating prediction performance, and that ROST (time) can clearly track and visualize user-interest drift over time.
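
    To make the notion of shared group-level rating patterns concrete, the sketch below builds a user-group by item-group rating "codebook" from a dense source domain and reads predictions for a sparse target domain from it. This illustrates the general idea only; it is not the ROST generative model, and the group assignments here are given rather than learned.

      # Group-level rating-pattern sharing, in miniature (not ROST).
      import numpy as np

      def group_codebook(R, user_groups, item_groups, k):
          """Average observed ratings (NaN = missing) within each group pair."""
          B = np.zeros((k, k))
          for g in range(k):
              for h in range(k):
                  cell = R[np.ix_(user_groups == g, item_groups == h)]
                  B[g, h] = np.nanmean(cell) if np.any(~np.isnan(cell)) else 0.0
          return B

      # Source domain: dense ratings; in practice the groups would be
      # learned, e.g., by co-clustering.
      rng = np.random.default_rng(1)
      R_src = rng.integers(1, 6, size=(40, 30)).astype(float)
      ug = rng.integers(0, 3, 40); ig = rng.integers(0, 3, 30)
      B = group_codebook(R_src, ug, ig, k=3)

      # Target domain: predict a missing rating from the shared codebook,
      # given the target user's and item's group memberships.
      print("predicted rating:", B[1, 2])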

  8. New Applications for the Testing and Visualization of Wireless Networks

    NASA Technical Reports Server (NTRS)

    Griffin, Robert I.; Cauley, Michael A.; Pleva, Michael A.; Seibert, Marc A.; Lopez, Isaac

    2005-01-01

    Traditional techniques for examining wireless networks use physical link characteristics such as signal-to-noise ratios (SNR) to assess the performance of wireless networks. Such measurements may not be reliable indicators of available bandwidth. This work describes two new software applications developed at NASA Glenn Research Center for the investigation of wireless networks. GPSIPerf combines measurements of Transmission Control Protocol (TCP) throughput with Global Positioning System (GPS) coordinates to give users a map of wireless bandwidth for outdoor environments where a wireless infrastructure has been deployed. GPSIPerfView combines the data provided by GPSIPerf with high-resolution digital elevation maps (DEM) to help users visualize and assess the impact of elevation features on wireless networks in a given sample area. These applications were used to examine TCP throughput in several wireless network configurations at desert field sites near Hanksville, Utah, in May 2004. Use of GPSIPerf and GPSIPerfView provides a geographically referenced picture of the extent and deterioration of TCP throughput in tested wireless network configurations. GPSIPerf results from field testing in Utah suggest that it can be useful in assessing other wireless network architectures, and may be useful to future human-robotic exploration missions.

  9. Pathway collages: personalized multi-pathway diagrams.

    PubMed

    Paley, Suzanne; O'Maille, Paul E; Weaver, Daniel; Karp, Peter D

    2016-12-13

    Metabolic pathway diagrams are a classical way of visualizing a linked cascade of biochemical reactions. However, to understand some biochemical situations, viewing a single pathway is insufficient, whereas viewing the entire metabolic network results in information overload. How do we enable scientists to rapidly construct personalized multi-pathway diagrams that depict a desired collection of interacting pathways and emphasize particular pathway interactions? We describe software for constructing personalized multi-pathway diagrams, called pathway collages, using a combination of manual and automatic layouts. The user specifies a set of pathways of interest for the collage from a Pathway/Genome Database. Layouts for the individual pathways are generated by the Pathway Tools software and are sent to a JavaScript Pathway Collage application implemented using Cytoscape.js. That application allows the user to re-position pathways; define connections between pathways; change visual style parameters; and paint metabolomics, gene expression, and reaction flux data onto the collage to obtain a desired multi-pathway diagram. We demonstrate the use of pathway collages in two application areas: a metabolomics study of pathogen drug response, and an Escherichia coli metabolic model. Pathway collages enable facile construction of personalized multi-pathway diagrams.
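
    The data handed from a layout generator to a Cytoscape.js front end is, at its simplest, a list of node and edge elements. The sketch below emits such a list from Python for two toy pathways joined by one user-defined connection; it is illustrative and far simpler than the actual Pathway Tools export.

      # Emit a minimal Cytoscape.js-compatible "elements" list (toy data).
      import json

      def node(nid, pathway):
          return {"data": {"id": nid, "pathway": pathway}}

      def edge(src, dst):
          return {"data": {"id": f"{src}->{dst}", "source": src, "target": dst}}

      elements = [
          node("glucose", "glycolysis"), node("pyruvate", "glycolysis"),
          node("acetyl-CoA", "TCA"), node("citrate", "TCA"),
          edge("glucose", "pyruvate"), edge("acetyl-CoA", "citrate"),
          edge("pyruvate", "acetyl-CoA"),   # user-defined inter-pathway link
      ]
      print(json.dumps(elements, indent=2))  # feed to cytoscape({elements: ...})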

  10. Exploring virtual worlds with head-mounted displays

    NASA Astrophysics Data System (ADS)

    Chung, James C.; Harris, Mark R.; Brooks, F. P.; Fuchs, Henry; Kelley, Michael T.

    1989-02-01

    Research has been conducted on the use of simple head-mounted displays in real-world applications. Such units provide the user with non-holographic true 3-D information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a virtual world which behaves like the real world in some respects. UNC's head-mounted display was built inexpensively from commercially available off-the-shelf components. Tracking of the user's head position and orientation is performed by a Polhemus Navigation Sciences 3SPACE tracker. The host computer uses the tracking information to generate updated images corresponding to the user's new left-eye and right-eye views. The images are broadcast to two liquid crystal television screens (220x320 pixels) mounted on a horizontal shelf at the user's forehead. The user views these color screens through half-silvered mirrors, enabling the computer-generated image to be superimposed upon the user's real physical environment. The head-mounted display was incorporated into existing molecular and architectural applications being developed at UNC. In molecular structure studies, chemists are presented with a room-sized molecule with which they can interact in a manner more intuitive than that provided by conventional 2-D displays and dial boxes. Walking around and through the large molecule may provide quicker understanding of its structure, and such problems as drug-enzyme docking may be approached with greater insight.

  11. ATS displays: A reasoning visualization tool for expert systems

    NASA Technical Reports Server (NTRS)

    Selig, William John; Johannes, James D.

    1990-01-01

    Reasoning visualization is a useful tool that can help users better understand the inherently non-sequential logic of an expert system. While this is desirable in almost all expert system applications, it is especially so for critical systems such as those destined for space-based operations. A hierarchical view of the expert system reasoning process and some characteristics of its various levels are presented. Also presented are Abstract Time Slice (ATS) displays, a tool for visualizing the plethora of interrelated information available at the host inferencing-language level of reasoning. The usefulness of this tool is illustrated with examples from a prototype potable water expert system for possible use aboard Space Station Freedom.

  12. GODIVA2: interactive visualization of environmental data on the Web.

    PubMed

    Blower, J D; Haines, K; Santokhee, A; Liu, C L

    2009-03-13

    GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
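
    One of the open international standards underpinning this kind of interoperability is the OGC Web Map Service. A generic GetMap request of the sort a GODIVA2-style client issues can be built as below; the server URL and layer name are placeholders, not GODIVA2's actual endpoints.

      # Generic OGC WMS 1.3.0 GetMap request (placeholder server and layer).
      import urllib.parse, urllib.request

      params = {
          "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
          "LAYERS": "sea_water_temperature",      # hypothetical layer name
          "CRS": "CRS:84", "BBOX": "-180,-90,180,90",
          "WIDTH": "1024", "HEIGHT": "512",
          "FORMAT": "image/png", "STYLES": "",
      }
      url = "https://example.org/wms?" + urllib.parse.urlencode(params)
      with urllib.request.urlopen(url) as resp:   # fetch the rendered map image
          open("map.png", "wb").write(resp.read())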

  13. Iowa Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2011-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  14. Flood Risk Management in Iowa through an Integrated Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, Ibrahim; Krajewski, Witold

    2013-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 1100 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  15. Bird's Eye View - A 3-D Situational Awareness Tool for the Space Station

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam; Chamitoff, Gregory

    2002-01-01

    Even as space-qualified computer hardware lags well behind the latest home computers, the possibility of using high-fidelity interactive 3-D graphics for displaying important on-board information has finally arrived, and is being used on board the International Space Station (ISS). With the quantity and complexity of space-flight telemetry, 3-D displays can greatly enhance the ability of users, both onboard and on the ground, to interpret data quickly and accurately. This is particularly true for data related to vehicle attitude, position, configuration, and relation to other objects on the ground or in orbit. Bird's Eye View (BEV) is a 3-D real-time application that provides a high degree of situational awareness for the crew. Its purpose is to instantly convey important motion-related parameters to the crew and mission controllers by presenting 3-D simulated camera views of the International Space Station (ISS) in its actual environment. Driven by actual telemetry, and running on board as well as on the ground, the user can visualize the Space Station relative to the Earth, Sun, stars, various reference frames, and selected targets, such as ground sites or communication satellites. Since the actual ISS configuration (geometry) is also modeled accurately, everything from the alignment of the solar panels to the expected view from a selected window can be visualized accurately. A virtual representation of the Space Station in real time has many useful applications. By selecting different cameras, the crew or mission control can monitor the station's orientation in space, position over the Earth, transition from day to night, direction to the Sun, the view from a particular window, or the motion of the robotic arm. By viewing the vehicle attitude and solar panel orientations relative to the Sun, the power status of the ISS can be easily visualized and understood. Similarly, the thermal impacts of vehicle attitude can be analyzed and visually confirmed. Communication opportunities can be displayed, and line-of-sight blockage due to interference by the vehicle structure (or the Earth) can be seen easily. Additional features in BEV display targets on the ground and in orbit, including cities, communication sites, landmarks, satellites, and special sites of scientific interest for Earth observation and photography. Any target can be selected and tracked. This gives the user a continual line of sight to the target of current interest and real-time knowledge about its visibility. Similarly, the vehicle ground-track, and an option to show "visibility circles" around displayed ground sites, provide continuous insight regarding current and future visibility of any target. BEV was designed with inputs from many disciplines in the flight control and operations community, both at NASA and from the International Partners. As such, BEV is setting the standards for interactive 3-D graphics for spacecraft applications. One important contribution of BEV is a generic graphical interface for camera control that can be used for any 3-D application. This interface has become part of the International Display and Graphics Standards for the 16-nation ISS partnership. Many other standards related to camera properties and the display of 3-D data have also been defined by BEV. Future enhancements to BEV will include capabilities for simulating ahead of the current time. This will give the user tools for analyzing off-nominal and future scenarios, as well as for planning future operations.

  16. Integrating Patient-Reported Outcomes into Spine Surgical Care through Visual Dashboards: Lessons Learned from Human-Centered Design

    PubMed Central

    Hartzler, Andrea L.; Chaudhuri, Shomir; Fey, Brett C.; Flum, David R.; Lavallee, Danielle

    2015-01-01

    Introduction: The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients—physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). Methods: We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Findings: Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. Conclusion: Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient care and quality improvement. Engaging health care professionals as stakeholders is a critical step toward the design of user-friendly HIT that is accepted, usable, and has the potential to enhance quality of care and patient outcomes. PMID:25988187

  17. A Prototype Search Toolkit

    NASA Astrophysics Data System (ADS)

    Knepper, Margaret M.; Fox, Kevin L.; Frieder, Ophir

    Information overload is now a reality. We no longer worry about obtaining a sufficient volume of data; we are now concerned with sifting and understanding the massive volumes of data available to us. To do so, we developed an integrated information processing toolkit that provides the user with a variety of ways to view their information. The views include keyword search results, a domain-specific ranking system that allows for adaptively capturing topic vocabularies to customize and focus the search results, navigation pages for browsing, and a geospatial and temporal component to visualize results in time and space and to support "what if" scenario playing. Integrating the information from different tools and sources gives the user additional information and another way to analyze the data. An example of the integration is illustrated using reports of avian influenza (bird flu).

  18. Development and application of virtual reality for man/systems integration

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1991-01-01

    While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still has the problem of presenting an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know whether the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the hands are the primary way a human physically interacts with the environment, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world. The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.
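
    As a rough illustration of the continuity idea described above, the sketch below (Python; the 5-degree step size is an assumption, not a value from the report) interpolates intermediate view angles between two head orientations:

    ```python
    import math

    def intermediate_views(start_deg, end_deg, max_step_deg=5.0):
        """Yield view angles between two head orientations so the rendered
        viewpoint rotates continuously instead of jumping (hypothetical
        helper; the step size is an illustrative assumption)."""
        delta = end_deg - start_deg
        steps = max(1, math.ceil(abs(delta) / max_step_deg))
        for i in range(1, steps + 1):
            yield start_deg + delta * i / steps

    # A 45-degree head turn to the left, rendered as nine intermediate frames.
    for angle in intermediate_views(0.0, -45.0):
        print(f"render view at yaw {angle:.1f} deg")
    ```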

  19. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three dimensional observers. Intuition relies on experience gained in a three dimensional environment. Gaining experience with virtual four dimensional objects and virtual three manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability for ourselves, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. A technology is described in this dissertation to convert a representation of higher dimensional models into a format that may be displayed in realtime on graphics cards available on many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four dimensional objects such that the user can see the model from any user selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple POV viewports, and optionally exported to a third party CAD viewer for further manipulation. Plucking and Manipulating the Aspect provides a tangible experience for the end-user in the same manner as any 3D Computer Aided Design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.
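
    The slicing step lends itself to a compact illustration. The following Python sketch simplifies the dissertation's arbitrarily oriented 3-flat to an axis-aligned hyperplane w = c and intersects it with the edges of a unit tesseract; slicing at w = 0.5 recovers the eight vertices of a cube (a toy stand-in, not the described system):

    ```python
    import itertools
    import numpy as np

    # Vertices and edges of a unit tesseract (4-cube).
    verts = np.array(list(itertools.product([0.0, 1.0], repeat=4)))
    edges = [(i, j) for i, j in itertools.combinations(range(16), 2)
             if np.sum(verts[i] != verts[j]) == 1]

    def slice_w(c):
        """Intersect each edge with the hyperplane w = c and return the
        3D points (x, y, z coordinates) of the resulting slice."""
        pts = []
        for i, j in edges:
            w0, w1 = verts[i][3], verts[j][3]
            if (w0 - c) * (w1 - c) < 0:        # edge straddles the hyperplane
                t = (c - w0) / (w1 - w0)
                pts.append(verts[i][:3] + t * (verts[j][:3] - verts[i][:3]))
        return np.array(pts)

    print(slice_w(0.5).shape)   # 8 points: the mid-slice of a tesseract is a cube
    ```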

  20. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5-capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though only a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full-resolution JPEG image for each raw and reduced ODI FITS image before producing a JPEG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails, or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client-side Model/View code (instead of the backend PHP Model/View/Controller code used previously), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images, thereby decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan images, and its feature set to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download gigabytes of FITS image data.
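
    A minimal sketch of the tileset idea follows (Python with Pillow); the file layout, tile size, and quality settings are illustrative assumptions, not the actual ImageX microservice:

    ```python
    import os
    from PIL import Image

    def make_tiles(src_jpeg, out_dir, tile=256):
        """Cut a full-resolution JPEG into a pyramid of JPEG tiles,
        halving the image at each level, so a frontend can fetch only
        the tiles covering the current viewport."""
        os.makedirs(out_dir, exist_ok=True)
        img = Image.open(src_jpeg).convert("RGB")
        level = 0
        while True:
            for y in range(0, img.height, tile):
                for x in range(0, img.width, tile):
                    box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
                    name = f"{level}_{x // tile}_{y // tile}.jpg"
                    img.crop(box).save(os.path.join(out_dir, name), quality=85)
            if img.width <= tile and img.height <= tile:
                break
            img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
            level += 1

    # make_tiles("odi_frame.jpg", "tiles/")   # hypothetical input file
    ```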

  1. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    PubMed

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. The user's head movements are tracked and the virtual patient is aligned with the real one using machine vision methods applied to pairs of live images. Experimental results, concerning frame rate and alignment precision between the virtual and real patient, demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  2. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    PubMed

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  3. Cognition-based development and evaluation of ergonomic user interfaces for medical image processing and archiving systems.

    PubMed

    Demiris, A M; Meinzer, H P

    1997-01-01

    Whether or not a computerized system enhances the conditions of work in the application domain depends very much on the user interface. Graphical user interfaces seem to attract the interest of users but mostly ignore some basic rules of visual information processing, thus leading to systems which are difficult to use, lowering productivity and increasing working stress (cognitive and workload). In this work we present some fundamental ergonomic considerations and their application to the medical image processing and archiving domain. We introduce the extensions to an existing concept needed to control and guide the development of GUIs with respect to domain-specific ergonomics. The suggested concept, called Model-View-Controller Constraints (MVCC), can be used to programmatically implement ergonomic constraints, and thus has some advantages over written style guides. We conclude with a presentation of existing norms and methods to evaluate user interfaces.

  4. The potential for gaming techniques in radiology education and practice.

    PubMed

    Reiner, Bruce; Siegel, Eliot

    2008-02-01

    Traditional means of communication, education and training, and research have been dramatically transformed with the advent of computerized medicine, and no other medical specialty has been more greatly affected than radiology. Of the myriad of newer computer applications currently available, computer gaming stands out for its unique potential to enhance end-user performance and job satisfaction. Research in other disciplines has demonstrated computer gaming to offer the potential for enhanced decision making, resource management, visual acuity, memory, and motor skills. Within medical imaging, video gaming provides a novel means to enhance radiologist and technologist performance and visual perception by increasing attentional capacity, visual field of view, and visual-motor coordination. These enhancements take on heightened importance with the increasing size and complexity of three-dimensional imaging datasets. Although these operational gains are important in themselves, psychologic gains intrinsic to video gaming offer the potential to reduce stress and improve job satisfaction by creating a fun and engaging means of spirited competition. By creating customized gaming programs and rewards systems, video game applications can be customized to the skill levels and preferences of individual users, thereby creating a comprehensive means to improve individual and collective job performance.

  5. GenomicusPlants: a web resource to study genome evolution in flowering plants.

    PubMed

    Louis, Alexandra; Murat, Florent; Salse, Jérôme; Crollius, Hugues Roest

    2015-01-01

    Comparative genomics combined with phylogenetic reconstructions are powerful approaches to study the evolution of genes and genomes. However, the current rapid expansion of the volume of genomic information makes it increasingly difficult to interrogate, integrate and synthesize comparative genome data while taking into account the maximum breadth of information available. GenomicusPlants (http://www.genomicus.biologie.ens.fr/genomicus-plants) is an extension of the Genomicus webserver that addresses this issue by allowing users to explore flowering plant genomes in an intuitive way, across the broadest evolutionary scales. Extant genomes of 26 flowering plants can be analyzed, as well as 23 ancestral reconstructed genomes. Ancestral gene order provides a long-term chronological view of gene order evolution, greatly facilitating comparative genomics and evolutionary studies. Four main interfaces ('views') are available where: (i) PhyloView combines phylogenetic trees with comparisons of genomic loci across any number of genomes; (ii) AlignView projects loci of interest against all other genomes to visualize its topological conservation; (iii) MatrixView compares two genomes in a classical dotplot representation; and (iv) Karyoview visualizes chromosome karyotypes 'painted' with colours of another genome of interest. All four views are interconnected and benefit from many customizable features. © The Author 2014. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.

  6. Interactive SIGHT: textual access to simple bar charts

    NASA Astrophysics Data System (ADS)

    Demir, Seniz; Oliver, David; Schwartz, Edward; Elzer, Stephanie; Carberry, Sandra; Mccoy, Kathleen F.; Chester, Daniel

    2010-12-01

    Information graphics, such as bar charts and line graphs, are an important component of many articles from popular media. The majority of such graphics have an intention (a high-level message) to communicate to the graph viewer. Since the intended message of a graphic is often not repeated in the accompanying text, graphics together with the textual segments contribute to the overall purpose of an article and cannot be ignored. Unfortunately, these visual displays are provided in a format which is not readily accessible to everyone. For example, individuals with sight impairments who use screen readers to listen to documents have limited access to the graphics. This article presents a new accessibility tool, the Interactive SIGHT (Summarizing Information GrapHics Textually) system, that is intended to enable visually impaired users to access the knowledge that one would gain from viewing information graphics found on the web. The current system, which is implemented as a browser extension that works on simple bar charts, can be invoked by a user via a keystroke combination while navigating the web. Once launched, Interactive SIGHT first provides a brief summary that conveys the underlying intention of a bar chart along with the chart's most significant and salient features, and then produces history-aware follow-up responses to provide further information about the chart upon request from the user. We present two user studies that were conducted with sighted and visually impaired users to determine how effective the initial summary and follow-up responses are in conveying the informational content of bar charts, and to evaluate how easy it is to use the system interface. The evaluation results are promising and indicate that the system responses are well-structured and enable visually impaired users to answer key questions about bar charts in an easy-to-use manner. Post-experimental interviews revealed that visually impaired participants were very satisfied with the system offering different options to access the content of a chart to meet their specific needs and that they would use Interactive SIGHT if it were publicly available so as not to have to ignore graphics on the web. As a language-based assistive technology designed to compensate for the lack of sight, our work paves the way for stronger acceptance of natural language interfaces to graph interpretation that we believe will be of great benefit to the visually impaired community.
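
    The flavor of the initial summary can be suggested with a toy sketch (Python); the heuristics below are illustrative stand-ins, not SIGHT's actual message-inference machinery:

    ```python
    def summarize_bar_chart(title, labels, values):
        """Produce a brief first-pass summary of a simple bar chart:
        overall trend plus the highest and lowest bars."""
        hi = max(range(len(values)), key=values.__getitem__)
        lo = min(range(len(values)), key=values.__getitem__)
        trend = "rising" if values[-1] > values[0] else "falling"
        return (f"The chart '{title}' shows a {trend} trend; "
                f"'{labels[hi]}' has the highest value ({values[hi]}) and "
                f"'{labels[lo]}' the lowest ({values[lo]}).")

    # Made-up data for illustration only.
    print(summarize_bar_chart("CD sales", ["2005", "2006", "2007"], [12, 9, 5]))
    ```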

  7. Precise photorealistic visualization for restoration of historic buildings based on tacheometry data

    NASA Astrophysics Data System (ADS)

    Ragia, Lemonia; Sarri, Froso; Mania, Katerina

    2018-03-01

    This paper puts forward a 3D reconstruction methodology applied to the restoration of historic buildings taking advantage of the speed, range and accuracy of a total geodetic station. The measurements representing geo-referenced points produced an interactive and photorealistic geometric mesh of a monument named `Neoria.' `Neoria' is a Venetian building located by the old harbor at Chania, Crete, Greece. The integration of tacheometry acquisition and computer graphics puts forward a novel integrated software framework for the accurate 3D reconstruction of a historical building. The main technical challenge of this work was the production of a precise 3D mesh based on a sufficient number of tacheometry measurements acquired fast and at low cost, employing a combination of surface reconstruction and processing methods. A fully interactive application based on game engine technologies was developed. The user can visualize and walk through the monument and the area around it as well as photorealistically view it at different times of day and night. Advanced interactive functionalities are offered to the user in relation to identifying restoration areas and visualizing the outcome of such works. The user could visualize the coordinates of the points measured, calculate distances and navigate through the complete 3D mesh of the monument. The geographical data are stored in a database connected with the application. Features referencing and associating the database with the monument are developed. The goal was to utilize a small number of acquired data points and present a fully interactive visualization of a geo-referenced 3D model.

  9. Visualisation and interaction design solutions to address specific demands in shared home care.

    PubMed

    Scandurra, Isabella; Hägglund, Maria; Koch, Sabine

    2006-01-01

    When care professionals from different organisations are involved in patient care, their different views on the care process may not be meaningfully integrated. The aim of this work was to use visualisation and interaction design solutions that address the specific demands of shared care in order to support a collaborative work process. Participatory design, comprising an interdisciplinary seminar series with real users and iterative prototyping, was applied. A set of interaction and visualisation design solutions addressing care professionals' requirements in shared home care is presented, introducing support for identifying the origin of information, holistic presentation of information, user-group-specific visualisation, avoiding cognitive overload, coordination of work and planning, and quick overviews. The design solutions are implemented in an integrated virtual health record system supporting cooperation and coordination in shared home care for the elderly. The described requirements are, however, generalised to comprise all shared care work. The presented design considerations allow healthcare professionals in different organisations to share patient data on mobile devices. Visualisation and interaction design facilitates specific work situations and assists in handling specific demands in shared care. The user interface is adapted to different user groups with similar yet distinct needs. Consequently, different views supporting cooperative work and presenting shared information in holistic overviews were developed.

  10. View-Dependent Simplification of Arbitrary Polygonal Environments

    DTIC Science & Technology

    2006-01-01

    (Fragmentary record: only scattered excerpts of the source document survive, mentioning culling of backfacing nodes, triangle-budget simplification, a screen-space error threshold and silhouette test, and citations to [Kumar 96] and Proceedings Visualization 95; no coherent abstract is available.)

  11. Dakota Graphical User Interface v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest; Glickman, Matthew; Gibson, Marcus

    Graphical analysis environment for Sandia's Dakota software for optimization and uncertainty quantification. The Dakota GUI is an interactive graphical analysis environment for creating, running, and interpreting Dakota optimization and uncertainty quantification studies. It includes problem (Dakota study) set-up, option specification, simulation interfacing, analysis execution, and results visualization. Through the use of wizards, templates, and views, the Dakota GUI helps users navigate Dakota's complex capability landscape.

  12. EINVis: a visualization tool for analyzing and exploring genetic interactions in large-scale association studies.

    PubMed

    Wu, Yubao; Zhu, Xiaofeng; Chen, Jian; Zhang, Xiang

    2013-11-01

    Epistasis (gene-gene interaction) detection in large-scale genetic association studies has recently drawn extensive research interest, as many complex traits are likely caused by the joint effect of multiple genetic factors. The large number of possible interactions poses both statistical and computational challenges. A variety of approaches have been developed to address the analytical challenges in epistatic interaction detection. These methods usually output the identified genetic interactions and store them in flat file formats. It is highly desirable to develop an effective visualization tool to further investigate the detected interactions and unravel hidden interaction patterns. We have developed EINVis, a novel visualization tool that is specifically designed to analyze and explore genetic interactions. EINVis displays interactions among genetic markers as a network. It utilizes a circular layout (specifically, a tree ring view) to simultaneously visualize the hierarchical interactions between single nucleotide polymorphisms (SNPs), genes, and chromosomes, and the network structure formed by these interactions. Using EINVis, the user can distinguish marginal effects from interactions, track interactions involving more than two markers, visualize interactions at different levels, and detect proxy SNPs based on linkage disequilibrium. EINVis is an effective and user-friendly free visualization tool for analyzing and exploring genetic interactions. It is publicly available with detailed documentation and an online tutorial on the web at http://filer.case.edu/yxw407/einvis/. © 2013 WILEY PERIODICALS, INC.
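
    The geometry behind a tree ring view can be sketched compactly (Python); the hierarchy and the proportional-span rule below are illustrative assumptions, not EINVis's layout code:

    ```python
    def n_leaves(node):
        kids = node.get("children", [])
        return 1 if not kids else sum(n_leaves(k) for k in kids)

    def assign_spans(node, start=0.0, end=360.0, depth=0, out=None):
        """Give every node an angular span proportional to its leaf
        count; drawing each depth as a concentric ring yields a tree
        ring view (chromosomes inside, genes, then SNPs outside)."""
        if out is None:
            out = []
        out.append((node["name"], depth, start, end))
        kids = node.get("children", [])
        total = sum(n_leaves(k) for k in kids)
        a = start
        for k in kids:
            b = a + (end - start) * n_leaves(k) / total
            assign_spans(k, a, b, depth + 1, out)
            a = b
        return out

    # Toy hierarchy with made-up names.
    tree = {"name": "chr1", "children": [
        {"name": "geneA", "children": [{"name": "snp1"}, {"name": "snp2"}]},
        {"name": "geneB", "children": [{"name": "snp3"}]}]}
    for name, depth, a, b in assign_spans(tree):
        print(f"{name}: ring {depth}, {a:.0f}-{b:.0f} deg")
    ```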

  13. Architecture Views Illustrating the Service Automation Aspect of SOA

    NASA Astrophysics Data System (ADS)

    Gu, Qing; Cuadrado, Félix; Lago, Patricia; Duenãs, Juan C.

    Earlier in this book, Chapter 8 provided a detailed analysis of service engineering, including a review of service engineering techniques and methodologies. This chapter is closely related to Chapter 8, as it shows how such approaches can be used to develop a service, with particular emphasis on the identification of three views (the automation decision view, the degree of service automation view and the service automation related data view) that structure and ease the elicitation and documentation of stakeholders' concerns. This is carried out through two large case studies used to learn the industrial needs in illustrating service deployment and configuration automation. This set of views adds to more traditional notations like UML the visual power of attracting the attention of their users to the addressed concerns, assisting them in their work. This is especially crucial in service-oriented architecting, where service automation is in high demand.

  14. OLSVis: an animated, interactive visual browser for bio-ontologies

    PubMed Central

    2012-01-01

    Background More than one million terms from biomedical ontologies and controlled vocabularies are available through the Ontology Lookup Service (OLS). Although OLS provides ample possibility for querying and browsing terms, the visualization of parts of the ontology graphs is rather limited and inflexible. Results We created the OLSVis web application, a visualiser for browsing all ontologies available in the OLS database. OLSVis shows customisable subgraphs of the OLS ontologies. Subgraphs are animated via a real-time force-based layout algorithm which is fully interactive: each time the user makes a change, e.g. browsing to a new term, hiding, adding, or dragging terms, the algorithm performs smooth and only essential reorganisations of the graph. This assures an optimal viewing experience, because subsequent screen layouts are not grossly altered, and users can easily navigate through the graph. URL: http://ols.wordvis.com Conclusions The OLSVis web application provides a user-friendly tool to visualise ontologies from the OLS repository. It broadens the possibilities to investigate and select ontology subgraphs through a smooth visualisation method. PMID:22646023

  15. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  16. Development of a Web GIS Application for Visualizing and Analyzing Community Out of Hospital Cardiac Arrest Patterns

    PubMed Central

    Semple, Hugh; Qin, Han; Sasson, Comilla

    2013-01-01

    Improving survival rates at the neighborhood level is increasingly seen as a priority for reducing overall rates of out-of-hospital cardiac arrest (OHCA) in the United States. Since wide disparities exist in OHCA rates at the neighborhood level, it is important for public health officials and residents to be able to quickly locate neighborhoods where people are at elevated risk for cardiac arrest and to target these areas for educational outreach and other mitigation strategies. This paper describes an OHCA web mapping application that was developed to provide users with interactive maps and data for them to quickly visualize and analyze the geographic pattern of cardiac arrest rates, bystander CPR rates, and survival rates at the neighborhood level in different U.S. cities. The data comes from the CARES Registry and is provided over a period spanning several years so users can visualize trends in neighborhood out-of-hospital cardiac arrest patterns. Users can also visualize areas that are statistical hot and cold spots for cardiac arrest and compare OHCA and bystander CPR rates in the hot and cold spots. Although not designed as a public participation GIS (PPGIS), this application seeks to provide a forum around which data and maps about local patterns of OHCA can be shared, analyzed and discussed with a view of empowering local communities to take action to address the high rates of OHCA in their vicinity. PMID:23923097
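
    Hot and cold spots of the kind mentioned above are commonly computed with the Getis-Ord Gi* statistic; the sketch below (Python/NumPy, with toy neighbourhood data) implements the standard formula, though whether this application uses exactly this statistic is an assumption:

    ```python
    import numpy as np

    def getis_ord_gstar(x, W):
        """Getis-Ord Gi* z-score per area, given values x and a binary
        spatial-weights matrix W that includes self-neighbourhood.
        Large positive values indicate hot spots, large negative
        values cold spots."""
        x = np.asarray(x, float)
        n = x.size
        xbar = x.mean()
        s = np.sqrt((x ** 2).mean() - xbar ** 2)
        Wi = W.sum(axis=1)                 # sum of weights per area
        S1 = (W ** 2).sum(axis=1)          # sum of squared weights
        num = W @ x - xbar * Wi
        den = s * np.sqrt((n * S1 - Wi ** 2) / (n - 1))
        return num / den

    # Four neighbourhoods on a line, rook adjacency including self;
    # rates are made-up OHCA incidence values.
    W = np.array([[1, 1, 0, 0], [1, 1, 1, 0],
                  [0, 1, 1, 1], [0, 0, 1, 1]], float)
    print(getis_ord_gstar([1.0, 9.0, 10.0, 1.0], W))
    ```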

  17. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  18. MFV-class: a multi-faceted visualization tool of object classes.

    PubMed

    Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting

    2004-11-01

    Classes are key software components in an object-oriented software system. In many industrial OO software systems, there are classes that have complicated structure and relationships. So in the processes of software maintenance, testing, reengineering, reuse and restructuring, it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory, and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of a class to uncover manifold facets of class contents. It enables visualizing three object-oriented metrics of classes to help users focus on the understanding process. A case study was conducted to evaluate our approach and the toolkit.

  19. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only to the data but also to the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  20. Proteopedia: Exciting Advances in the 3D Encyclopedia of Biomolecular Structure

    NASA Astrophysics Data System (ADS)

    Prilusky, Jaime; Hodis, Eran; Sussman, Joel L.

    Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia ( http://www.proteopedia.org ) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically-generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and availability of biological assemblies of protein structures alongside their asymmetric units.

  1. A tool for exploring space-time patterns: an animation user research.

    PubMed

    Ogao, Patrick J

    2006-08-29

    Ever since Dr. John Snow (1813-1854) used a case map to identify a water well as the source of a cholera outbreak in London in the 1800s, spatio-temporal maps have become vital tools in a wide range of disease mapping and control initiatives. The increasing use of spatio-temporal maps in these life-threatening sectors warrants that they are accurate and easy to interpret, to enable prompt decision making by health experts. Similar spatio-temporal maps are observed in urban growth and census mapping--all critical aspects of a country's socio-economic development. In this paper, user test research was carried out to determine the effectiveness of spatio-temporal maps (animation) in exploring geospatial structures encompassing disease, urban and census mapping. Three types of animation were used, namely passive, interactive and inference-based animation, with the key differences between them being the level of interactivity and complementary domain knowledge that each offers to the user. Passive animation maintains a view-only status: the user has no control over its contents and dynamic variables. Interactive animation provides users with basic media player controls, navigation and orientation tools. Inference-based animation incorporates these interactive capabilities together with a complementary automated intelligent view that alerts users to interesting patterns, trends or anomalies that may be inherent in the data sets. The test focussed on the role of animation's passive and interactive capabilities in exploring space-time patterns by engaging test subjects in a thinking-aloud evaluation protocol. The test subjects were selected from a geoinformatics (map reading, interpretation and analysis abilities) background. Every test subject used each of the three types of animation, and their performances for each session were assessed. The results show that interactivity in animation is a preferred exploratory tool in identifying, interpreting and providing explanations about observed geospatial phenomena. Also, exploring geospatial data structures using animation is best achieved using provocative interactive tools, as was seen with the inference-based animation. The visual methods employed using the three types of animation are all related, and together these patterns confirm the exploratory cognitive structure and processes for visualization tools. The generic types of animation as defined in this paper play a crucial role in facilitating the visualization of geospatial data. These animations can be created and their contents defined based on the user's presentational and exploratory needs. For highly explorative tasks, maintaining a link between the data sets and the animation is crucial to enabling a rich and effective knowledge discovery environment.

  2. A tool for exploring space-time patterns : an animation user research

    PubMed Central

    Ogao, Patrick J

    2006-01-01

    Background Ever since Dr. John Snow (1813–1854) used a case map to identify a water well as the source of a cholera outbreak in London in the 1800s, spatio-temporal maps have become vital tools in a wide range of disease mapping and control initiatives. The increasing use of spatio-temporal maps in these life-threatening sectors warrants that they are accurate and easy to interpret, to enable prompt decision making by health experts. Similar spatio-temporal maps are observed in urban growth and census mapping – all critical aspects of a country's socio-economic development. In this paper, user test research was carried out to determine the effectiveness of spatio-temporal maps (animation) in exploring geospatial structures encompassing disease, urban and census mapping. Results Three types of animation were used, namely passive, interactive and inference-based animation, with the key differences between them being the level of interactivity and complementary domain knowledge that each offers to the user. Passive animation maintains a view-only status: the user has no control over its contents and dynamic variables. Interactive animation provides users with basic media player controls, navigation and orientation tools. Inference-based animation incorporates these interactive capabilities together with a complementary automated intelligent view that alerts users to interesting patterns, trends or anomalies that may be inherent in the data sets. The test focussed on the role of animation's passive and interactive capabilities in exploring space-time patterns by engaging test subjects in a thinking-aloud evaluation protocol. The test subjects were selected from a geoinformatics (map reading, interpretation and analysis abilities) background. Every test subject used each of the three types of animation, and their performances for each session were assessed. The results show that interactivity in animation is a preferred exploratory tool in identifying, interpreting and providing explanations about observed geospatial phenomena. Also, exploring geospatial data structures using animation is best achieved using provocative interactive tools, as was seen with the inference-based animation. The visual methods employed using the three types of animation are all related, and together these patterns confirm the exploratory cognitive structure and processes for visualization tools. Conclusion The generic types of animation as defined in this paper play a crucial role in facilitating the visualization of geospatial data. These animations can be created and their contents defined based on the user's presentational and exploratory needs. For highly explorative tasks, maintaining a link between the data sets and the animation is crucial to enabling a rich and effective knowledge discovery environment. PMID:16938138

  3. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    NASA Astrophysics Data System (ADS)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  4. Soldier-worn augmented reality system for tactical icon visualization

    NASA Astrophysics Data System (ADS)

    Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared

    2012-06-01

    This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
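
    A textbook baseline for one slice of such sensor fusion is a complementary filter on heading; the sketch below (Python) is far simpler than ULTRA-Vis's fusion of inertial, magnetic, GPS, and vision data, and all constants are assumptions:

    ```python
    def fuse_heading(gyro_rate, mag_heading, heading, dt, k=0.02):
        """One step of a complementary filter: integrate the gyro yaw
        rate for smoothness and pull toward the magnetometer heading
        to bound drift. Angles in degrees, rates in deg/s."""
        pred = heading + gyro_rate * dt
        err = (mag_heading - pred + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        return (pred + k * err) % 360.0

    h = 0.0
    for _ in range(600):   # stationary user; gyro has a 0.5 deg/s bias
        h = fuse_heading(0.5, 90.0, h, 0.1)
    print(round(h, 1))     # settles near 92.5: drift is bounded, not unbounded
    ```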

  5. VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.

    PubMed

    Yu, Bowen; Silva, Claudio T

    2017-01-01

    Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
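
    The subset flow constraint can be illustrated in a few lines (Python with pandas); the nodes and predicates below are toy assumptions, not VisFlow's implementation:

    ```python
    import pandas as pd

    # In a subset flow, every edge carries row indices of one original
    # table, so downstream nodes can union or intersect selections and
    # assign rendering properties to rows unambiguously.
    table = pd.DataFrame({"mpg": [18, 33, 25, 12], "cyl": [6, 4, 4, 8]})

    def node_filter(subset, predicate):
        """A filter node: maps an index subset to a smaller index subset."""
        return [i for i in subset if predicate(table.loc[i])]

    full = list(table.index)
    efficient = node_filter(full, lambda r: r.mpg > 20)
    four_cyl = node_filter(full, lambda r: r.cyl == 4)
    brushed = sorted(set(efficient) & set(four_cyl))   # linked brushing
    print(table.loc[brushed])
    ```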

  6. Magnifying Smartphone Screen Using Google Glass for Low-Vision Users.

    PubMed

    Pundlik, Shrinivas; HuaQi Yi; Rui Liu; Peli, Eli; Gang Luo

    2017-01-01

    Magnification is a key accessibility feature used by low-vision smartphone users. However, small screen size can lead to loss of context and make interaction with magnified displays challenging. We hypothesize that controlling the viewport with head motion can be natural and help in gaining access to magnified displays. We implement this idea using a Google Glass that displays the magnified smartphone screenshots received in real time via Bluetooth. Instead of navigating with touch gestures on the magnified smartphone display, the users can view different screen locations by rotating their head, and remotely interacting with the smartphone. It is equivalent to looking at a large virtual image through a head-contingent viewing port, in this case, the Glass display with ~15° field of view. The system can transfer seven screenshots per second at 8× magnification, sufficient for tasks where the display content does not change rapidly. A pilot evaluation of this approach was conducted with eight normally sighted and four visually impaired subjects performing assigned tasks using calculator and music player apps. Results showed that performance in the calculation task was faster with the Glass than with the phone's built-in screen zoom. We conclude that head-contingent scanning control can be beneficial in navigating magnified small smartphone displays, at least for tasks involving familiar content layout.
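
    The head-contingent viewport amounts to mapping head angles onto a pan position within the magnified virtual image; a minimal sketch follows (Python), with the angular range and window size as assumptions rather than the published system's calibration:

    ```python
    def viewport_origin(yaw_deg, pitch_deg, virt_w, virt_h, win_w, win_h,
                        deg_range=30.0):
        """Map head yaw/pitch to the top-left corner of a viewing
        window panned across a large virtual image: the full
        comfortable head range sweeps the full image."""
        u = min(max((yaw_deg + deg_range) / (2 * deg_range), 0.0), 1.0)
        v = min(max((pitch_deg + deg_range) / (2 * deg_range), 0.0), 1.0)
        return int(u * (virt_w - win_w)), int(v * (virt_h - win_h))

    # An 8x-magnified 1920x1080 screenshot viewed through a small window.
    print(viewport_origin(0.0, 0.0, 1920 * 8, 1080 * 8, 1280, 720))    # centred
    print(viewport_origin(-30.0, 0.0, 1920 * 8, 1080 * 8, 1280, 720))  # left edge
    ```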

  7. Evaluation of a novel multi-articulated endoscope: proof of concept through a virtual simulation.

    PubMed

    Karvonen, Tuukka; Muranishi, Yusuke; Yamamoto, Goshiro; Kuroda, Tomohiro; Sato, Toshihiko

    2017-07-01

    In endoscopic surgery, such as video-assisted thoracoscopic surgery and laparoscopic surgery, providing the surgeon a good view of the target is important. The rigid endoscope has for years been the go-to tool for this purpose, but it has certain limitations, like the inability to work around obstacles. To improve on current tools, a novel multi-articulated endoscope (MAE) is currently under development. To investigate its feasibility and possible value, we performed a user test using a virtual prototype of the MAE with the intent to show that it outperforms the conventional endoscope while bringing minimal additional burden to the operator. To evaluate the prototype, we built a virtual model of the MAE and a rigid oblique-viewing endoscope. Through a comparative user study we evaluate the ability of each device to visualize certain targets placed inside the virtual chest cavity by the angle between the visual axis of the scope and the normal of the plane of the target, while accounting for the usability of each endoscope by recording the time taken for each task. In addition, we collected a questionnaire from each participant to obtain feedback. The angles obtained using the MAE were smaller on average ([Formula: see text]), indicating that better visualization can be achieved through the proposed method. A nonsignificant difference in mean time taken for each task in favor of the rigid endoscope was also found ([Formula: see text]). We have demonstrated that better visualization for endoscopic surgery can be achieved through our novel MAE. The scope may bring about a paradigm shift in the field of minimally invasive surgery by providing more freedom in viewpoint selection, enabling surgeons to perform more elaborate procedures in minimally invasive settings.
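
    The visibility metric used here, the angle between the scope's visual axis and the target plane's normal, reduces to a dot product; a small sketch (Python/NumPy, with made-up vectors):

    ```python
    import numpy as np

    def view_angle_deg(axis, normal):
        """Angle between the endoscope's visual axis and the target
        plane's normal; smaller means a more frontal, better view."""
        a = np.asarray(axis, float)
        n = np.asarray(normal, float)
        c = abs(a @ n) / (np.linalg.norm(a) * np.linalg.norm(n))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    print(view_angle_deg([0, 0, -1], [0, 0, 1]))   # 0 deg: head-on view
    print(view_angle_deg([1, 0, -1], [0, 0, 1]))   # 45 deg: oblique view
    ```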

  8. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure from motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera movement restriction imposed by SFM so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further developed a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as that from user scribbling) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user study results showing that our approach is more intuitive and less labor intensive, while producing 3D effects comparable to those from current state-of-the-art interactive algorithms.
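
    The depth-propagation idea can be caricatured in one dimension: neighbouring samples should share depth, and scribbled samples should keep their labels. The least-squares sketch below (Python/NumPy) is a toy stand-in for the paper's quadratic program over video frames:

    ```python
    import numpy as np

    def propagate_depth(n, labels, lam=10.0):
        """Recover a dense 1-D depth profile from sparse user labels:
        smoothness terms tie neighbours together, weighted data terms
        pin labelled samples to their scribbled values."""
        A, b = [], []
        for i in range(n - 1):                 # smoothness: d[i] ~ d[i+1]
            row = np.zeros(n)
            row[i], row[i + 1] = 1.0, -1.0
            A.append(row)
            b.append(0.0)
        for i, z in labels.items():            # data terms from scribbles
            row = np.zeros(n)
            row[i] = lam
            A.append(row)
            b.append(lam * z)
        d, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return d

    # Two scribbles at the ends propagate into a smooth ramp.
    print(np.round(propagate_depth(7, {0: 1.0, 6: 4.0}), 2))
    ```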

  9. A visual identification key utilizing both gestalt and analytic approaches to identification of Carices present in North America (Plantae, Cyperaceae)

    PubMed Central

    2013-01-01

    Abstract Images are a critical part of the identification process because they enable direct, immediate and relatively unmediated comparisons between a specimen being identified and one or more reference specimens. The Carices Interactive Visual Identification Key (CIVIK) is a novel tool for identification of North American Carex species, the largest vascular plant genus in North America, and two less numerous closely-related genera, Cymophyllus and Kobresia. CIVIK incorporates 1288 high-resolution tiled image sets that allow users to zoom in to view minute structures that are crucial at times for identification in these genera. Morphological data are derived from the earlier Carex Interactive Identification Key (CIIK) which in turn used data from the Flora of North America treatments. In this new iteration, images can be viewed in a grid or histogram format, allowing multiple representations of data. In both formats the images are fully zoomable. PMID:24723777

  10. VAiRoma: A Visual Analytics System for Making Sense of Places, Times, and Events in Roman History.

    PubMed

    Cho, Isaac; Dou, Wenwen; Wang, Derek Xiaoyu; Sauda, Eric; Ribarsky, William

    2016-01-01

    Learning and gaining knowledge of Roman history is an area of interest for students and citizens at large. This is an example of a subject with great sweep (with many interrelated sub-topics over, in this case, a 3,000-year history) that is hard to grasp by any individual and, in its full detail, is not available as a coherent story. In this paper, we propose a visual analytics approach to construct a data-driven view of Roman history based on a large collection of Wikipedia articles. Extracting and enabling the discovery of useful knowledge on events, places, times, and their connections from large amounts of textual data has always been a challenging task. To this aim, we introduce VAiRoma, a visual analytics system that couples state-of-the-art text analysis methods with an intuitive visual interface to help users make sense of events, places, times, and more importantly, the relationships between them. VAiRoma goes beyond textual content exploration, as it permits users to compare, make connections, and externalize the findings all within the visual interface. As a result, VAiRoma allows users to learn and create new knowledge regarding Roman history in an informed way. We evaluated VAiRoma with 16 participants through a user study, with the task being to learn about Roman piazzas through finding relevant articles and new relationships. Our study results showed that the VAiRoma system enables the participants to find more relevant articles and connections compared to Web searches and a literature search conducted in a Roman library. Subjective feedback on VAiRoma was also very positive. In addition, we ran two case studies that demonstrate how VAiRoma can be used for deeper analysis, permitting the rapid discovery and analysis of a small number of key documents even when the original collection contains hundreds of thousands of documents.
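
    A minimal topic-modelling pass of the kind such text analysis pipelines build on might look as follows (Python with scikit-learn); the corpus, topic count, and any resemblance to VAiRoma's actual pipeline are assumptions:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Made-up miniature corpus of "articles" about Roman history.
    docs = ["caesar crossed the rubicon with his legion",
            "the forum and the piazza shaped civic life in rome",
            "the legion and the consul fought in the civil war",
            "piazza navona stands on the site of an ancient stadium"]
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-4:][::-1]]
        print(f"topic {k}: {', '.join(top)}")
    ```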

  11. Visualization Development of the Ballistic Threat Geospatial Optimization

    DTIC Science & Technology

    2015-07-01

    (Fragmentary record: only scattered excerpts survive, describing visualization with topographic globes, Keyhole Markup Language (KML) and Collada files in NASA World Wind, which gives the user the ability to import 3-D models and navigate; after the first-person view window is closed, images stored in memory are converted to a QuickTime movie (.MOV). The remainder is a truncated acronym list: HPC, high-performance computing; JOGL, Java implementation of OpenGL; KML, Keyhole Markup Language; NASA, National Aeronautics and Space Administration.)

  12. Social Image Tag Ranking by Two-View Learning

    NASA Astrophysics Data System (ADS)

    Zhuang, Jinfeng; Hoi, Steven C. H.

    Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users could be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In order to solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike the conventional learning approaches that usually assume some parametric models, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results showed that the proposed method can be more effective than the conventional approaches.

  13. Improving the User Experience of Finding and Visualizing Oceanographic Data

    NASA Astrophysics Data System (ADS)

    Rauch, S.; Allison, M. D.; Groman, R. C.; Chandler, C. L.; Galvarino, C.; Gegg, S. R.; Kinkade, D.; Shepherd, A.; Wiebe, P. H.; Glover, D. M.

    2013-12-01

    Searching for and locating data of interest can be a challenge to researchers as increasing volumes of data are made available online through various data centers, repositories, and archives. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is keenly aware of this challenge and, as a result, has implemented features and technologies aimed at improving data discovery and enhancing the user experience. BCO-DMO was created in 2006 to manage and publish data from research projects funded by the Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections and the Division of Polar Programs (PLR) Antarctic Sciences Organisms and Ecosystems Program (ANT) of the US National Science Foundation (NSF). The BCO-DMO text-based and geospatial-based data access systems provide users with tools to search, filter, and visualize data in order to efficiently find data of interest. The geospatial interface, developed using a suite of open-source software (including MapServer [1], OpenLayers [2], ExtJS [3], and MySQL [4]), allows users to search and filter/subset metadata based on program, project, or deployment, or by using a simple word search. The map responds based on user selections, presents options that allow the user to choose specific data parameters (e.g., a species or an individual drifter), and presents further options for visualizing those data on the map or in "quick-view" plots. The data managed and made available by BCO-DMO are very heterogeneous in nature, from in-situ biogeochemical, ecological, and physical data, to controlled laboratory experiments. Due to the heterogeneity of the data types, a 'one size fits all' approach to visualization cannot be applied. Datasets are visualized in a way that will best allow users to assess fitness for purpose. An advanced geospatial interface, which contains a semantically-enabled faceted search [5], is also available. These search facets are highly interactive and responsive, allowing users to construct their own custom searches by applying multiple filters. New filtering and visualization tools are continually being added to the BCO-DMO system as new data types are encountered and as we receive feedback from our data contributors and users. As our system becomes more complex, teaching users about the many interactive features becomes increasingly important. Tutorials and videos are made available online. Recent in-person classroom-style tutorials have proven useful for both demonstrating our system to users and for obtaining feedback to further improve the user experience. References: [1] University of Minnesota. MapServer: Open source web mapping. http://www.mapserver.org [2] OpenLayers: Free Maps for the Web. http://www.openlayers.org [3] Sencha. ExtJS. http://www.sencha.com/products/extjs [4] MySQL. http://www.mysql.com/ [5] Maffei, A. R., Rozell, E. A., West, P., Zednik, S., and Fox, P. A. 2011. Open Standards and Technologies in the S2S Framework. Abstract IN31A-1435 presented at American Geophysical Union 2011 Fall Meeting, San Francisco, CA, 7 December 2011.

  14. PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC

    USGS Publications Warehouse

    Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.

    1997-01-01

    PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and runs on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.

  15. Creating a Vision Channel for Observing Deep-Seated Anatomy in Medical Augmented Reality

    NASA Astrophysics Data System (ADS)

    Wimmer, Felix; Bichlmeier, Christoph; Heining, Sandro M.; Navab, Nassir

    The intent of medical Augmented Reality (AR) is to augment the surgeon's real view of the patient with the patient's interior anatomy, rendered from a suitable visualization of medical imaging data. This paper presents a fast, user-defined clipping technique for medical AR that cuts away any parts of the virtual anatomy, and any images of the real part of the AR scene, that hinder the surgeon's view of the deep-seated region of interest. Modeled on cut-away techniques from scientific illustrations and computer graphics, the method creates a fixed vision channel to the inside of the patient. It enables a clear view of the focused virtual anatomy and, moreover, improves the perception of spatial depth.

  16. Visualization of small scale structures on high resolution DEMs

    NASA Astrophysics Data System (ADS)

    Kokalj, Žiga; Zakšek, Klemen; Pehani, Peter; Čotar, Klemen; Oštir, Krištof

    2015-04-01

    Knowledge of terrain morphology is very important for the observation of numerous processes and events, and digital elevation models are therefore one of the most important datasets in geographic analyses. Furthermore, recognition of natural and anthropogenic microrelief structures, which can be observed on detailed terrain models derived from aerial laser scanning (lidar) or structure-from-motion photogrammetry, is of paramount importance in many applications. In this paper we thus examine and evaluate methods of raster lidar data visualization for the determination (recognition) of microrelief features and present a series of strategies to assist in selecting the preferred visualization for structures of various shapes and sizes, set in varied landscapes. Often the answer is not definite, and more frequently a combination of techniques has to be used to map a very diverse landscape. Only recently have researchers been able to benefit from free software for the calculation of advanced visualization techniques. These tools are often difficult to understand, have numerous options that confuse the user, or require and produce non-standard data formats, because they were written for specific purposes. We therefore designed the Relief Visualization Toolbox (RVT) as a free, easy-to-use, standalone application to create visualizations from high-resolution digital elevation data. It is tailored to beginners in relief interpretation, but it can also be used by more advanced users in data processing and geographic information systems. It offers a range of techniques, such as simple hillshading and its derivatives, slope gradient, trend removal, positive and negative openness, sky-view factor, and anisotropic sky-view factor. All included methods have been proven effective for the detection of small-scale features, and the default settings are optimized for this task. However, the usability of the tool goes beyond computation for visualization purposes; sky-view factor, for example, is an essential variable in many fields, e.g. in meteorology. RVT produces two types of results: 1) the original files have a full range of values and are intended for further analyses in geographic information systems; 2) the simplified versions are histogram-stretched for visualization purposes and saved as 8-bit GeoTIFF files. This means that they can be explored in non-GIS software, e.g. with simple picture viewers, which is essential when a larger community of non-specialists needs to be considered, e.g. in public collaborative projects. The tool recognizes all frequently used single-band raster formats and supports elevation raster file data conversion.
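    Of the listed techniques, simple hillshading is the most familiar and the easiest to reproduce. The following NumPy sketch (not RVT code; sign conventions for aspect differ between implementations) computes an analytical hillshade from slope and aspect:

      import numpy as np

      def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
          # Illumination is the cosine of the angle between the surface
          # normal and the light direction, derived from slope and aspect.
          az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
          dz_dy, dz_dx = np.gradient(dem, cellsize)
          slope = np.arctan(np.hypot(dz_dx, dz_dy))
          aspect = np.arctan2(-dz_dx, dz_dy)
          shaded = (np.sin(alt) * np.cos(slope)
                    + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
          return np.clip(shaded, 0.0, 1.0)   # 0 = shadow, 1 = fully lit

      dem = np.random.rand(64, 64) * 100.0   # toy elevation grid
      print(hillshade(dem).shape)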

  17. Pre-Occupancy Evaluation of Patient Satisfaction in Hospitals.

    PubMed

    van der Zwart, Johan; van der Voordt, Theo J M

    2015-01-01

    To explore analytical drawing techniques as a means to assess the attainment of preset objectives in the design phase of hospital buildings, and to test ex ante whether the building fits these objectives, with a focus on views of nature, wayfinding, daylight, visibility of patient areas from reception desks, privacy and communication between medical staff and patients, and noise reduction. The impact of the built environment on user value is at the core of evidence-based design, but these values are normally only experienced by users after the building is constructed. Therefore, assessment of these values during the design phase could improve the outcome for patients. An analysis of available assessment tools showed that research by drawing and the use of space syntax methods is an adequate means to visualize the strengths and weaknesses of floor plans in relation to spatial user experience. This approach is illustrated by an assessment of a nursing ward of the Deventer hospital in the Netherlands. Floor plan analysis using space syntax techniques makes it possible to visualize various aspects of user value and supports the incorporation of usability issues in the discussion between the designer, the client, and the users during the design process. It is recommended to test the findings of the design assessment by a post-occupancy evaluation of the building-in-use and to conduct similar studies in other hospitals, as a means to build a body of knowledge for user-oriented design and management of hospital buildings. © The Author(s) 2015.

  18. Graphical user interface concepts for tactical augmented reality

    NASA Astrophysics Data System (ADS)

    Argenta, Chris; Murphy, Anne; Hinton, Jeremy; Cook, James; Sherrill, Todd; Snarski, Steve

    2010-04-01

    Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program†. Our approach to achieving the objectives of ULTRA-Vis, called iLeader, incorporates a full-color 40° field of view (FOV) see-through holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark up the 3D battle-space with symbology identifying graphical control measures, friendly force positions, and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360° representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, nondisruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.

  19. Visualizing Human Migration Through Space and Time

    NASA Astrophysics Data System (ADS)

    Zambotti, G.; Guan, W.; Gest, J.

    2015-07-01

    Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to regulate flows more stringently. Understanding this phenomenon, its causes, processes, and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for worldwide cross-country migration, and later applied to visualizing domestic migration patterns between provinces within China and between states in the United States, all across multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance, and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.
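    The core rendering step, drawing each flow as a weighted arc between the origin and destination centroids, can be sketched in a few lines of Python with Matplotlib (the platform itself is built on ArcGIS Server; the quadratic-curve construction and all coordinates below are illustrative only):

      import numpy as np
      import matplotlib.pyplot as plt

      def draw_flow(ax, origin, dest, volume, max_volume):
          # One origin-destination flow as a curved arc whose width and
          # opacity encode the migration volume.
          (x0, y0), (x1, y1) = origin, dest
          mx, my = (x0 + x1) / 2, (y0 + y1) / 2
          dx, dy = x1 - x0, y1 - y0
          cx, cy = mx - 0.2 * dy, my + 0.2 * dx   # bend the arc off the chord
          t = np.linspace(0, 1, 50)[:, None]
          curve = ((1 - t) ** 2 * np.array([x0, y0])
                   + 2 * (1 - t) * t * np.array([cx, cy])
                   + t ** 2 * np.array([x1, y1]))
          w = volume / max_volume
          ax.plot(curve[:, 0], curve[:, 1], lw=0.5 + 4 * w, alpha=0.3 + 0.7 * w)

      fig, ax = plt.subplots()
      draw_flow(ax, (116.4, 39.9), (121.5, 31.2),   # e.g. Beijing -> Shanghai
                volume=800_000, max_volume=1_000_000)
      plt.show()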

  20. The Role of Direct and Visual Force Feedback in Suturing Using a 7-DOF Dual-Arm Teleoperated System.

    PubMed

    Talasaz, Ali; Trejos, Ana Luisa; Patel, Rajni V

    2017-01-01

    The lack of haptic feedback in robotics-assisted surgery can result in tissue damage or accidental tool-tissue hits. This paper focuses on exploring the effect of haptic feedback via direct force reflection and visual presentation of force magnitudes on performance during suturing in robotics-assisted minimally invasive surgery (RAMIS). For this purpose, a haptics-enabled dual-arm master-slave teleoperation system capable of measuring tool-tissue interaction forces in all seven Degrees-of-Freedom (DOFs) was used. Two suturing tasks, tissue puncturing and knot-tightening, were chosen to assess user skills when suturing on phantom tissue. Sixteen subjects participated in the trials and their performance was evaluated from various points of view: force consistency, number of accidental hits with tissue, amount of tissue damage, quality of the suture knot, and the time required to accomplish the task. According to the results, visual force feedback was not very useful during the tissue puncturing task as different users needed different amounts of force depending on the penetration of the needle into the tissue. Direct force feedback, however, was more useful for this task to apply less force and to minimize the amount of damage to the tissue. Statistical results also reveal that both visual and direct force feedback were required for effective knot tightening: direct force feedback could reduce the number of accidental hits with the tissue and also the amount of tissue damage, while visual force feedback could help to securely tighten the suture knots and maintain force consistency among different trials/users. These results provide evidence of the importance of 7-DOF force reflection when performing complex tasks in a RAMIS setting.

  1. Visual task performance using a monocular see-through head-mounted display (HMD) while walking.

    PubMed

    Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka

    2013-12-01

    A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Power mobility with collision avoidance for older adults: user, caregiver, and prescriber perspectives.

    PubMed

    Wang, Rosalie H; Korotchenko, Alexandra; Hurd Clarke, Laura; Mortenson, W Ben; Mihailidis, Alex

    2013-01-01

    Collision avoidance technology has the capacity to facilitate safer mobility among older power mobility users with physical, sensory, and cognitive impairments, thus enabling independence for more users. Little is known about consumers' perceptions of collision avoidance. This article draws on interviews (29 users, 5 caregivers, and 10 prescribers) to examine views on design and utilization of this technology. Data analysis identified three themes: "useful situations or contexts," "technology design issues and real-life application," and "appropriateness of collision avoidance technology for a variety of users." Findings support ongoing development of collision avoidance for older adult users. The majority of participants supported the technology and felt that it might benefit current users and users with visual impairments, but might be unsuitable for people with significant cognitive impairments. Some participants voiced concerns regarding the risk for injury with power mobility use and some identified situations where collision avoidance might be beneficial (driving backward, avoiding dynamic obstacles, negotiating outdoor barriers, and learning power mobility use). Design issues include the need for context awareness, reliability, and user interface specifications. User desire to maintain driving autonomy supports development of collaboratively controlled systems. This research lays the groundwork for future development by illustrating consumer requirements for this technology.

  3. The LandCarbon Web Application: Advanced Geospatial Data Delivery and Visualization Tools for Communication about Ecosystem Carbon Sequestration and Greenhouse Gas Fluxes

    NASA Astrophysics Data System (ADS)

    Thomas, N.; Galey, B.; Zhu, Z.; Sleeter, B. M.; Lehmer, E.

    2015-12-01

    The LandCarbon web application (http://landcarbon.org) is a collaboration between the U.S. Geological Survey and U.C. Berkeley's Geospatial Innovation Facility (GIF). The LandCarbon project is a national assessment focused on improved understanding of carbon sequestration and greenhouse gas fluxes in and out of ecosystems related to land use, using scientific capabilities from USGS and other organizations. The national assessment is conducted at a regional scale, covers all 50 states, and incorporates data from remote sensing, land change studies, aquatic and wetland data, hydrological and biogeochemical modeling, and wildfire mapping to estimate baseline and future potential carbon storage and greenhouse gas fluxes. The LandCarbon web application is a geospatial portal that allows for a sophisticated data delivery system as well as a suite of engaging tools that showcase the LandCarbon data using interactive web based maps and charts. The web application was designed to be flexible and accessible to meet the needs of a variety of users. Casual users can explore the input data and results of the assessment for a particular area of interest in an intuitive and interactive map, without the need for specialized software. Users can view and interact with maps, charts, and statistics that summarize the baseline and future potential carbon storage and fluxes for U.S. Level 2 Ecoregions for 3 IPCC emissions scenarios. The application allows users to access the primary data sources and assessment results for viewing and download, and also to learn more about the assessment's objectives, methods, and uncertainties through published reports and documentation. The LandCarbon web application is built on free and open source libraries including Django and D3. The GIF has developed the Django-Spillway package, which facilitates interactive visualization and serialization of complex geospatial raster data. The underlying LandCarbon data is available through an open application programming interface (API), which will allow other organizations to build their own custom applications and tools. New features such as finer scale aggregation and an online carbon calculator are being added to the LandCarbon web application to continue to make the site interactive, visually compelling, and useful for a wide range of users.
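    Because the abstract only states that an open API exists without documenting its routes, the snippet below is purely hypothetical: the endpoint, query parameters, and response fields are invented to show what consuming such a JSON API typically looks like.

      import json
      from urllib.request import urlopen

      # Hypothetical route and fields -- consult landcarbon.org for the
      # actual API documentation.
      URL = "http://landcarbon.org/api/ecoregions/?scenario=A1B&format=json"

      with urlopen(URL) as resp:
          data = json.loads(resp.read().decode("utf-8"))
      for region in data.get("results", []):
          print(region.get("name"), region.get("carbon_storage"))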

  4. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference has been lacking, which leads to an inefficient visualization and user interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative, online visualization evaluation. Fifteen participants joined the study, which was based on three different visualization designs. The results provide a regularized regression model that can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
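    The abstract does not specify which regularized regression was used, so as a generic sketch of the fusion step (feature counts and names invented; scikit-learn's ridge regression stands in for the paper's model), the three sensing streams can be concatenated per trial and fit against the rated task complexity:

      import numpy as np
      from sklearn.linear_model import RidgeCV
      from sklearn.preprocessing import StandardScaler

      n_trials = 45
      eeg = np.random.randn(n_trials, 8)      # e.g. EEG band powers
      eye = np.random.randn(n_trials, 4)      # e.g. fixation/saccade statistics
      logs = np.random.randn(n_trials, 3)     # e.g. interaction-log counts
      complexity = np.random.randn(n_trials)  # rated task complexity

      X = StandardScaler().fit_transform(np.hstack([eeg, eye, logs]))
      model = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, complexity)
      print("per-feature weights:", model.coef_)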

  5. Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired.

    PubMed

    Rastogi, Ravi; Pawluk, T V Dianne; Ketchum, Jessica

    2013-07-01

    One possibility for providing access to visual graphics for those who are visually impaired is to present them tactually; unfortunately, details easily available to vision need to be magnified to be accessible through touch. For this, we propose an "intuitive" zooming algorithm that solves the potential problems of directly applying visual zooming techniques to haptic displays, which sense the current location of a user on a virtual diagram with a position sensor and then provide the appropriate local information through force or tactile feedback. Our technique works by determining and then traversing the levels of an object tree hierarchy of a diagram. In this manner, the zoom steps adjust to the content to be viewed, avoid clipping, and do not zoom when no object is present. The algorithm was tested using a small, "mouse-like" display with tactile feedback on pictures representing houses in a community and boats on a lake. We asked the users to answer questions related to details in the pictures. Comparing our technique to linear and logarithmic step zooming, we found a significant increase in the correctness of the responses (odds ratios of 2.64:1 and 2.31:1, respectively) and in usability (differences of 36% and 19%, respectively) using our "intuitive" zooming technique.
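    The key step, traversing an object tree so that each zoom step lands on actual content, can be sketched as follows (a simplified rendition of the idea, not the authors' implementation; the bounds and scene are invented):

      class Node:
          def __init__(self, bounds, children=()):
              self.bounds = bounds            # (x0, y0, x1, y1) of the object
              self.children = list(children)  # sub-objects at the next level

      def zoom_in(node, pointer):
          # One "intuitive" zoom step: descend to the child object under
          # the pointer, so the step size adapts to content and the view
          # never zooms into empty space.
          for child in node.children:
              x0, y0, x1, y1 = child.bounds
              if x0 <= pointer[0] <= x1 and y0 <= pointer[1] <= y1:
                  return child                # new viewport = child's bounds
          return node                         # no object here: do not zoom

      lake = Node((0, 0, 100, 100),
                  [Node((10, 10, 30, 25)), Node((60, 40, 90, 70))])
      print(zoom_in(lake, (20, 15)).bounds)   # -> (10, 10, 30, 25)
      print(zoom_in(lake, (50, 90)).bounds)   # -> whole scene, no zoom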

  6. Development of interactive graphic user interfaces for modeling reaction-based biogeochemical processes in batch systems with BIOGEOCHEM

    NASA Astrophysics Data System (ADS)

    Chang, C.; Li, M.; Yeh, G.

    2010-12-01

    The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as fast/equilibrium or slow/kinetic. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate law, microbially mediated enzymatic kinetics, or a user-specified rate equation. None of the existing models has encompassed this wide array of scopes. To ease the input/output learning curve of the unique features of BIOGEOCHEM, an interactive graphical user interface was developed with Microsoft Visual Studio and .Net tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform to the BIOGEOCHEM input file format. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstrations. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience in FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.

  7. Slycat™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.

    Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user's desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.

  8. The effect of four user interface concepts on visual scan pattern similarity and information foraging in a complex decision making task.

    PubMed

    Starke, Sandra D; Baber, Chris

    2018-07-01

    User interface (UI) design can affect the quality of decision making, where decisions based on digitally presented content are commonly informed by visually sampling information through eye movements. Analysis of the resulting scan patterns - the order in which people visually attend to different regions of interest (ROIs) - gives an insight into information foraging strategies. In this study, we quantified scan pattern characteristics for participants engaging with conceptually different user interface designs. Four interfaces were modified along two dimensions relating to the effort of accessing information: data presentation (either alpha-numerical data or colour blocks) and information access time (all information sources readily available, or sequential revealing of information required). The aim of the study was to investigate whether a) people develop repeatable scan patterns and b) different UI concepts affect information foraging and task performance. Thirty-two participants (eight for each UI concept) were given the task of correctly classifying 100 credit card transactions as normal or fraudulent based on nine transaction attributes. Attributes varied in their usefulness for predicting the correct outcome. Conventional and more recent (network analysis- and bioinformatics-based) eye tracking metrics were used to quantify visual search. Empirical findings were evaluated in the context of random data and the accuracy achievable with theoretical decision making strategies. Results showed short repeating sequence fragments within longer scan patterns across participants and conditions, comprising a systematic and a random search component. The UI design concept showing alpha-numerical data in full view resulted in the most complete data foraging, while the design concept showing colour blocks in full view resulted in the fastest task completion time. Decision accuracy was not significantly affected by UI design. Theoretical calculations showed that the difference in achievable accuracy between very complex and simple decision making strategies was small. We conclude that goal-directed search of familiar information results in repeatable scan pattern fragments (often corresponding to information sources considered particularly important), but no repeatable complete scan pattern. The underlying concept of the UI affects how visual search is performed and how a decision making strategy develops. This should be taken into consideration when designing for applied domains. Copyright © 2018 Elsevier Ltd. All rights reserved.
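    One of the "bioinformatics-based" ways to compare ROI visit sequences is string edit distance, sketched below (the metric choice is ours for illustration; the paper's exact metrics are not reproduced here):

      def edit_distance(a, b):
          # Levenshtein distance between two ROI visit sequences, each
          # encoded as a string with one letter per region of interest.
          prev = list(range(len(b) + 1))
          for i, x in enumerate(a, 1):
              cur = [i]
              for j, y in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,              # deletion
                                 cur[j - 1] + 1,           # insertion
                                 prev[j - 1] + (x != y)))  # substitution
              prev = cur
          return prev[-1]

      print(edit_distance("ABCDAB", "ABDAB"))   # -> 1 (one ROI skipped)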

  9. VisIVO: A Library and Integrated Tools for Large Astrophysical Dataset Exploration

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Costa, A.; Ersotelos, N.; Krokos, M.; Massimino, P.; Petta, C.; Vitello, F.

    2012-09-01

    VisIVO provides an integrated suite of tools and services that can be used in many scientific fields. VisIVO development started within the Virtual Observatory framework. VisIVO allows users to create meaningful visualizations of highly complex, large-scale datasets and to produce movies of these visualizations based on distributed infrastructures. VisIVO supports high-performance, multi-dimensional visualization of large-scale astrophysical datasets. Users can rapidly obtain meaningful visualizations while preserving full and intuitive control of the relevant parameters. VisIVO consists of VisIVO Desktop, a stand-alone application for interactive visualization on standard PCs; VisIVO Server, a platform for high-performance visualization; VisIVO Web, a custom-designed web portal; VisIVOSmartphone, an application that exploits the VisIVO Server functionality; and the latest VisIVO feature: the VisIVO Library, which allows a job running on a computational system (grid, HPC, etc.) to produce movies directly from the code's internal data arrays, without the need to produce intermediate files. This is particularly important when running on large computational facilities, where the user wants to look at the results during the data production phase. For example, in grid computing facilities, images can be produced directly in the grid catalogue while the user code is running on a system that cannot be directly accessed by the user (a worker node). The deployment of VisIVO on the DG and gLite is carried out with the support of the EDGI and EGI-Inspire projects. Depending on the structure and size of the datasets under consideration, the data exploration process can take several hours of CPU time for creating customized views, and the production of movies can potentially last several days. For this reason, an MPI parallel version of VisIVO could play a fundamental role in increasing performance; e.g., it could be automatically deployed on nodes that are MPI aware. A central concept in our development is thus to produce unified code that can run either on serial nodes or in parallel on HPC-oriented grid nodes. Another important aspect in obtaining the highest possible performance is the integration of VisIVO processes with grid nodes where GPUs are available. We have selected CUDA for implementing a range of computationally heavy modules. VisIVO is supported by the EGI-Inspire, EDGI, and SCI-BUS projects.

  10. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  11. Diabetes Interactive Atlas

    PubMed Central

    Burrows, Nilka R.; Geiss, Linda S.

    2014-01-01

    The Diabetes Interactive Atlas is a recently released Web-based collection of maps that allows users to view geographic patterns and examine trends in diabetes and its risk factors over time across the United States and within states. The atlas provides maps, tables, graphs, and motion charts that depict national, state, and county data. Large amounts of data can be viewed in various ways simultaneously. In this article, we describe the design and technical issues for developing the atlas and provide an overview of the atlas’ maps and graphs. The Diabetes Interactive Atlas improves visualization of geographic patterns, highlights observation of trends, and demonstrates the concomitant geographic and temporal growth of diabetes and obesity. PMID:24503340

  12. Real-Time Multimission Event Notification System for Mars Relay

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.

    2013-01-01

    As the Mars Relay Network is in constant flux (missions and teams going through their daily workflows), it is imperative that users be aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as they are received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time; i.e., messages are pushed to the user while logged into the system and queued for later viewing when the user is not online. The software does not do away with email notifications, but augments them with in-line notifications. Further, this software expands the set of events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated emails often "get lost" among other email that comes in. This software allows an expanded set of notifications (including user-generated ones) to be displayed in-line within the program. By separating notifications, this can improve a user's workflow.
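    The push-or-queue delivery pattern the abstract describes (push while the user is logged in, queue for later viewing otherwise) can be sketched in a few lines; this is the generic pattern, not the MaROS implementation:

      from collections import defaultdict, deque

      class Notifier:
          def __init__(self):
              self.online = {}                     # user -> delivery callback
              self.queued = defaultdict(deque)     # user -> pending events

          def publish(self, user, event):
              if user in self.online:
                  self.online[user](event)         # real-time, in-line delivery
              else:
                  self.queued[user].append(event)  # held for the next login

          def login(self, user, callback):
              self.online[user] = callback
              while self.queued[user]:             # drain the backlog
                  callback(self.queued[user].popleft())

      n = Notifier()
      n.publish("lander_team", "orbiter pass rescheduled")
      n.login("lander_team", print)                # prints the queued event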

  13. Interactive Visualization of Infrared Spectral Data: Synergy of Computation, Visualization, and Experiment for Learning Spectroscopy

    NASA Astrophysics Data System (ADS)

    Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.

    2000-05-01

    A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.

  14. Scribl: an HTML5 Canvas-based graphics library for visualizing genomic data over the web.

    PubMed

    Miller, Chase A; Anthony, Jon; Meyer, Michelle M; Marth, Gabor

    2013-02-01

    High-throughput biological research requires simultaneous visualization as well as analysis of genomic data, e.g. read alignments, variant calls and genomic annotations. Traditionally, such integrative analysis required desktop applications operating on locally stored data. Many current terabyte-size datasets generated by large public consortia projects, however, are already only feasibly stored at specialist genome analysis centers. As even small laboratories can afford very large datasets, local storage and analysis are becoming increasingly limiting, and it is likely that most such datasets will soon be stored remotely, e.g. in the cloud. These developments will require web-based tools that enable users to access, analyze and view vast remotely stored data with a level of sophistication and interactivity that approximates desktop applications. As rapidly dropping cost enables researchers to collect data intended to answer questions in very specialized contexts, developers must also provide software libraries that empower users to implement customized data analyses and data views for their particular application. Such specialized, yet lightweight, applications would empower scientists to better answer specific biological questions than possible with general-purpose genome browsers currently available. Using recent advances in core web technologies (HTML5), we developed Scribl, a flexible genomic visualization library specifically targeting coordinate-based data such as genomic features, DNA sequence and genetic variants. Scribl simplifies the development of sophisticated web-based graphical tools that approach the dynamism and interactivity of desktop applications. Software is freely available online at http://chmille4.github.com/Scribl/ and is implemented in JavaScript with all modern browsers supported.

  15. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
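    The GUI itself is written in Matlab, but the underlying idea of sweeping smoothly through a continuum of 2D projections can be sketched in NumPy (an illustrative approximation, not the DataHigh code):

      import numpy as np

      def projection_path(dim, steps=30, seed=0):
          # Yield a smooth sequence of orthonormal 2-column projection
          # bases by interpolating between two random bases and
          # re-orthonormalizing each step with a QR decomposition.
          rng = np.random.default_rng(seed)
          ortho2 = lambda: np.linalg.qr(rng.standard_normal((dim, 2)))[0]
          a, b = ortho2(), ortho2()
          for t in np.linspace(0.0, 1.0, steps):
              q, _ = np.linalg.qr((1 - t) * a + t * b)
              yield q                          # columns: two projection axes

      activity = np.random.randn(200, 10)      # 200 timepoints, 10-D latents
      for basis in projection_path(10, steps=3):
          print((activity @ basis).shape)      # each view is (200, 2)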

  16. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.

  17. Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris

    2012-10-14

    With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information remain limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration - where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the "interaction junk"). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.

  18. Virtual integral holography

    NASA Astrophysics Data System (ADS)

    Venolia, Dan S.; Williams, Lance

    1990-08-01

    A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point of view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of databases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, and which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three-dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display - film processing turnaround time and the difficulty of displaying scenes in full color - are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.

  19. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-01-01

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded in participatory design, through older adult interviews conducted during a six-month pilot sensor study. Through a secondary analysis of the interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a sensor visualization design that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific day. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement in care.

  20. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-07-24

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded in participatory design, through older adult interviews conducted during a six-month pilot sensor study. Through a secondary analysis of the interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a sensor visualization design that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific day. CONCLUSIONS: Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement in care.

  1. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets

    PubMed Central

    Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T; Else, J; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.

    2012-01-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300

  2. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets.

    PubMed

    Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S

    2012-04-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo.

  3. Satellite Data Processing System (SDPS) users manual V1.0

    NASA Technical Reports Server (NTRS)

    Caruso, Michael; Dunn, Chris

    1989-01-01

    SDPS is a menu-driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling, and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks, and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program; it does not contain complete type checking or error diagnostics, which means the program may crash. Known anomalies are mentioned in the appropriate sections as notes or cautions. SDPS was designed to be used on Sun Microsystems workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed for workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations, and the SunView window system.

  4. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  5. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  6. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  7. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the Earth sciences with convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy-to-use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files, and it provides links to popular visualization tools so that data can be quickly browsed. Envision is a public domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with the software.

  8. Interactive visual exploration and analysis of origin-destination data

    NASA Astrophysics Data System (ADS)

    Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.

    2018-05-01

    In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
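
    As a rough sketch of the interactive clustering step described above (the feature columns, distance threshold, and linkage choice here are illustrative assumptions, not the authors' code), agglomerative hierarchical clustering of origin-destination features might look like this:

```python
# Minimal sketch of multivariate clustering of origin-destination records;
# column layout and threshold are illustrative only.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

trips = np.array([
    # ox, oy, dx, dy, distance_km, duration_min
    [121.47, 31.23, 121.50, 31.24, 3.2, 14.0],
    [121.48, 31.22, 121.51, 31.25, 3.9, 17.0],
    [121.40, 31.30, 121.35, 31.28, 5.5, 22.0],
])

features = StandardScaler().fit_transform(trips)   # normalize variables
clusterer = AgglomerativeClustering(n_clusters=None,
                                    distance_threshold=2.0,  # user-adjustable
                                    linkage="average")
labels = clusterer.fit_predict(features)
```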

  9. EverVIEW: a visualization platform for hydrologic and Earth science gridded data

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig

    2015-01-01

    The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open-source libraries to help users explore spatially explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid-cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time, or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.

  10. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter, in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
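
    The abstract mentions smoothing the virtual camera position with a Kalman filter. A minimal constant-velocity Kalman filter over one coordinate might look like the following sketch; the noise parameters are assumptions, not values from the paper.

```python
# Illustrative constant-velocity Kalman filter for smoothing a camera
# coordinate along the device path (one axis shown; repeat per axis).
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1e-2, dt=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = q * np.eye(2)                        # process noise (assumed)
    R = np.array([[r]])                      # measurement noise (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)         # update
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed
```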

  11. A New Architecture for Visualization: Open Mission Control Technologies

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2017-01-01

    Open Mission Control Technologies (MCT) is a new architecture for visualization of mission data. Driven by requirements for new mission capabilities, including distributed mission operations, access to data anywhere, customization by users, synthesis of multiple data sources, and flexibility for multi-mission adaptation, Open MCT provides users with an integrated, customizable environment. Developed at NASA's Ames Research Center (ARC), in collaboration with NASA's Advanced Multimission Operations System (AMMOS) and NASA's Jet Propulsion Laboratory (JPL), Open MCT is getting its first mission use on the Jason 3 Mission, and is also available in the testbed for the Mars 2020 Rover and for development use for NASA's Resource Prospector Lunar Rover. The open source nature of the project provides for use outside of space missions, including open source contributions from a community of users. The defining features of Open MCT for mission users are data integration, end-user composition, and multiple views. Data integration provides access to mission data across domains, making data such as activities, timelines, telemetry, imagery, event timers and procedures available in one place, without application switching. End-user composition provides users with layouts, which act as a canvas on which to assemble visualizations. Multiple views provide the capability to view the same data in different ways, with live switching of data views in place. Open MCT is browser-based and works on the desktop as well as tablets and phones, providing access to data anywhere. An early use case for mobile data access took place on the Resource Prospector (RP) Mission Distributed Operations Test, in which rover engineers in the field were able to view telemetry on their phones. We envision this capability providing decision support from off-duty personnel to on-console operators. The plug-in architecture also allows for adaptation to different mission capabilities: different data types and capabilities may be added or removed using plugins, and an API provides a means to write new capabilities and to create data adaptors. Data plugins exist for NASA mission data sources, and adaptors have been written by international and commercial users. Open MCT is open source, which enables collaborative development across organizations and also makes the product available outside of the space community, providing a potential source of usage and ideas to drive product design and development. The combination of open source with an Apache 2 license, and distribution on GitHub, has enabled an active community of users and contributors. The spectrum of users for Open MCT is, to our knowledge, unprecedented for mission software: in addition to our NASA users, we have, through open source, had users and inquiries on projects ranging from the Internet of Things to radio hobbyists to farming projects. We have an active community of contributors, enabling a flow of ideas inside and outside of the space community.

  12. Dedicated computer system AOTK for image processing and analysis of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Fojud, A.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.; Piekarska-Boniecka, H.

    2017-07-01

    The aim of this research was to develop AOTK (Polish: Analiza Obrazu Trzeszczki Kopytowej), a dedicated application for image processing and analysis of the horse navicular bone. The application was built with specialized software, Visual Studio 2013 and the .NET platform, and the image processing and analysis algorithms were implemented using the AForge.NET libraries. The implemented algorithms enable accurate extraction of the characteristics of navicular bones and saving of the data to external files. Modules implemented in AOTK allow calculation of user-selected distances and a preliminary assessment of the structural preservation of the examined objects. The application interface is designed to give the user the best possible view of the analyzed images.

  13. Learning to recognize objects on the fly: a neurally based dynamic field approach.

    PubMed

    Faubel, Christian; Schöner, Gregor

    2008-05-01

    Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
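
    For readers unfamiliar with dynamic fields, the following toy sketch (not the authors' model; all parameters are illustrative) shows the core ingredients: a 1-D field whose localized activation peak is stabilized by local excitation and broader inhibition, plus a slow memory trace laid down where the field is active.

```python
# Toy 1-D dynamic neural field: a localized input creates a
# self-stabilizing activation peak; a slow memory trace records it.
import numpy as np

n, tau, tau_mem, h = 100, 10.0, 500.0, -2.0
x = np.arange(n)
# interaction kernel: local excitation on top of global inhibition
kernel = 4.0 * np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2) - 1.0

u = np.full(n, h)            # field activation (resting level h)
mem = np.zeros(n)            # memory trace
stimulus = 5.0 * np.exp(-0.5 * ((x - 40) / 2.0) ** 2)

for _ in range(1000):
    fu = 1.0 / (1.0 + np.exp(-u))                 # sigmoid output
    interaction = np.convolve(fu, kernel, mode="same") / n
    u += (-u + h + stimulus + interaction + mem) / tau
    mem += (fu - mem) * (fu > 0.5) / tau_mem      # trace only where active
```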

  14. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the ‘nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  15. Three visualization approaches for communicating and exploring PIT tag data

    USGS Publications Warehouse

    Letcher, Benjamin; Walker, Jeffrey D.; O'Donnell, Matthew; Whiteley, Andrew R.; Nislow, Keith; Coombs, Jason

    2018-01-01

    As the number, size and complexity of ecological datasets has increased, narrative and interactive raw data visualizations have emerged as important tools for exploring and understanding these large datasets. As a demonstration, we developed three visualizations to communicate and explore passive integrated transponder tag data from two long-term field studies. We created three independent visualizations for the same dataset, allowing separate entry points for users with different goals and experience levels. The first visualization uses a narrative approach to introduce users to the study. The second visualization provides interactive cross-filters that allow users to explore multi-variate relationships in the dataset. The last visualization allows users to visualize the movement histories of individual fish within the stream network. This suite of visualization tools allows a progressive discovery of more detailed information and should make the data accessible to users with a wide variety of backgrounds and interests.

  16. Visualizing NetCDF Files by Using the EverVIEW Data Viewer

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium compliant, Web-mapping service client and charting interface allows the user to view Web-available spatial data as additional map overlays and provides simple charting visualizations of NetCDF grid values.
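
    As a minimal illustration of consuming such NetCDF data programmatically (assuming the netCDF4 Python library; the file and variable names below are hypothetical):

```python
# Inspect a NetCDF file like those EverVIEW consumes; names hypothetical.
from netCDF4 import Dataset

with Dataset("water_elevation.nc") as nc:
    print(nc.dimensions)                 # e.g. time, y, x
    print(nc.variables.keys())
    elev = nc.variables["elevation"]     # hypothetical variable name
    first_step = elev[0, :, :]           # grid-cell values at time 0
    print(first_step.min(), first_step.max())
```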

  17. Global Visualization (GloVis) Viewer

    USGS Publications Warehouse

    ,

    2005-01-01

    GloVis (http://glovis.usgs.gov) is a browse image-based search and order tool that can be used to quickly review the land remote sensing data inventories held at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS). GloVis was funded by the AmericaView project to reduce the difficulty of identifying and acquiring data for user-defined study areas. Updated daily with the most recent satellite acquisitions, GloVis displays data in a mosaic, allowing users to select any area of interest worldwide and immediately view all available browse images for the following Landsat data sets: Multispectral Scanner (MSS), Multi-Resolution Land Characteristics (MRLC), Orthorectified, Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and ETM+ Scan Line Corrector-off (SLC-off). Other data sets include Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS), Aqua MODIS, and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion data.

  18. Acceptance of direct physician access to a computer-based patient record in a managed care setting.

    PubMed

    Dewey, J B; Manning, P; Brandt, S

    1993-01-01

    Kaiser Permanente Mid-Atlantic States has developed a fully integrated outpatient information system which currently runs on an IBM ES9000 on a VM platform and is written in MUMPS. The applications include Lab, Radiology, Transcription, Appointments, Pharmacy, Encounter tracking, Hospitalizations, Referrals, Phone Advice, Pap tracking, Problem list, Immunization tracking, and Patient demographics. They are department-specific and require input and output from a dumb terminal. We have developed a physician's workstation to access this information, using PC-compatible computers running Microsoft Windows and a custom Microsoft Visual Basic 2.0 environment that draws from these 14 applications, giving the physician a comprehensive view of all electronic medical records. Through rapid prototyping, voluntary participation, formal training and gradual implementation we have created an enthusiastic response: 95% of our physician PC users access the system each month. The use ranges from 0.2 to 3.0 screens of data viewed per patient visit. This response continues to drive the process toward still greater user acceptance and further practice enhancement.

  19. XEphem: Interactive Astronomical Ephemeris

    NASA Astrophysics Data System (ADS)

    Downey, Elwood Charles

    2011-12-01

    XEphem is a scientific-grade interactive astronomical ephemeris package for UNIX-like systems. Written in C, X11 and Motif, it is easily ported to other systems. Among other things, XEphem: computes heliocentric, geocentric and topocentric information for all objects; has built-in support for all planets; the moons of Mars, Jupiter, Saturn, Uranus and Earth; central meridian longitude of Mars and Jupiter; Saturn's rings; and Jupiter's Great Red Spot; allows user-defined objects including stars, deepsky objects, asteroids, comets and Earth satellites; provides special efficient handling of large catalogs including Tycho, Hipparcos, GSC; displays data in configurable tabular formats in conjunction with several interactive graphical views; displays a night-at-a-glance 24 hour graphic showing when any selected objects are up; displays 3-D stereo Solar System views that are particularly well suited for visualizing comet trajectories; quickly finds all close pairs of objects in the sky; and sorts and prints all catalogs with very flexible criteria for creating custom observing lists. Its capabilities are listed more fully in the user manual introduction.

  20. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for measurement of free energies and free energy differences and data/image export.
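
    The underlying computation is a sum of deposited Gaussian hills, with the free energy estimated as the negative of the accumulated bias. A minimal 1-D sketch follows; the HILLS column layout assumed here (time, center, sigma, height, ...) is an illustration, not a specification of the tool.

```python
# Reconstruct a 1-D metadynamics bias potential from Gaussian hills.
import numpy as np

def bias_potential(grid, centers, sigmas, heights):
    v = np.zeros_like(grid)
    for c, s, w in zip(centers, sigmas, heights):
        v += w * np.exp(-((grid - c) ** 2) / (2.0 * s ** 2))
    return v

grid = np.linspace(-np.pi, np.pi, 200)
hills = np.loadtxt("HILLS", comments="#")   # assumed column order below
V = bias_potential(grid, hills[:, 1], hills[:, 2], hills[:, 3])
free_energy = -V                             # up to an additive constant
```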

  1. Visual object recognition for mobile tourist information systems

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander

    2005-03-01

    We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia data about related history, architecture, or other cultural context of historic or artistic relevance can then be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is achieved by pointing the device towards an urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.

  2. The RCSB protein data bank: integrative view of protein, gene and 3D structural information

    PubMed Central

    Rose, Peter W.; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R.; Christie, Cole H.; Costanzo, Luigi Di; Duarte, Jose M.; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S.; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S.; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D.; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y.; Zardecki, Christine; Berman, Helen M.; Burley, Stephen K.

    2017-01-01

    The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a ‘Structural View of Biology.’ Recent developments have improved the user experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive. PMID:27794042

  3. REVEAL: Reconstruction, Enhancement, Visualization, and Ergonomic Assessment for Laparoscopy

    DTIC Science & Technology

    2008-08-01

    measurable disparity shift. Such an endoscope can be used to generate a stereoscopic view for a surgeon, as with the DaVinci robot in use today...training or surgery. We are working on the user interface issues of incorporating this measurement capability into the standard set of tools during...scope use, and in structuring a set of tasks around the use of through-the-scope measurement in order to determine how this tool can affect efficiency

  4. CyanoBase: the cyanobacteria genome database update 2010.

    PubMed

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  5. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

    The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station; images must therefore be transferred from archive servers to local display stations in an acceptable way, meaning fast and user-friendly, with the notion of a file hidden from users. The transfer of image files follows different schemes, including prefetching and direct image selection. Prefetching allows previous studies of a patient to be retrieved in advance, and direct image selection is provided to retrieve images on request. When images are transferred locally to the display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences, but file names alone are not explicit enough to properly describe the content of a file. A specific utility has therefore been developed to present a list of patients and, for each patient, a list of exams which can be selected and automatically displayed. The system has been successfully tested in different clinical environments and will soon be extended hospital-wide.

  6. A mobile phone system to find crosswalks for visually impaired pedestrians

    PubMed Central

    Shen, Huiying; Chan, Kee-Yip; Coughlan, James; Brabyn, John

    2010-01-01

    Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian’s travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera’s field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections. PMID:20411035

  7. Vector representation of user's view using self-organizing map

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Yamaguchi, Tomohisa; Monden, Eri; Kawabata, Shunji; Kamitani, Motoki

    2004-05-01

    Various objects, such as pictures, music, and texts, exist in our environment, and we form a view of these objects by looking, reading or listening. This view is deeply connected with our behaviors and is very important for understanding them. We therefore propose a method that acquires a view as a vector, and we apply the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped, because the view of them changes from user to user. We therefore represent the structure of the multimedia database by a vector representing the user's view together with a stereotyped vector, and acquire sequences containing this structure as elements. As an application of sequence generation incorporating the user's view, we demonstrate a city-sequence generation system that reflects the user's intention. We apply a self-organizing map to this system to represent the user's view.
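
    As a rough illustration of the self-organizing map component (not the authors' implementation; map size, features and rates are arbitrary), a toy SOM that maps selection feature vectors onto a small grid of prototypes might look like this:

```python
# Toy self-organizing map: feature vectors describing a user's selections
# are mapped onto a small line of prototype vectors.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 4))          # e.g. 4 features per selected item
weights = rng.random((10, 4))        # 10 map nodes

for t in range(1000):
    lr = 0.5 * (1 - t / 1000)                     # decaying learning rate
    radius = max(1, int(3 * (1 - t / 1000)))      # shrinking neighborhood
    v = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(weights - v, axis=1))  # best match
    for i in range(max(0, bmu - radius), min(10, bmu + radius + 1)):
        weights[i] += lr * (v - weights[i])

user_view_vector = weights.flatten()  # one fixed-length "view" descriptor
```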

  8. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.

  9. Developing an educational curriculum for EnviroAtlas ...

    EPA Pesticide Factsheets

    EnviroAtlas is a web-based tool developed by the EPA and its partners, which provides interactive tools and resources for users to explore the benefits that people receive from nature, often referred to as ecosystem goods and services. Ecosystem goods and services are important to human health and well-being. Using EnviroAtlas, users can access, view, and analyze diverse information to better understand the potential impacts of decisions. EnviroAtlas provides two primary tools, the Interactive Map and the Eco-Health Relationship Browser. EnviroAtlas integrates geospatial data from a variety of sources so that users can visualize the impacts of decision-making on ecosystems. The Interactive Map allows users to investigate various ecosystem elements (i.e. land cover, pollution, and community development) and compare them across localities in the United States. The best part of the Interactive Map is that it does not require specialized software for map application; rather, it requires only a computer and an internet connection. As such, it can be used as a powerful educational tool. The Eco-Health Relationship Browser is also a web-based, highly interactive tool that uses existing scientific literature to visually demonstrate the connections between the environment and human health. As an ASPPH/EPA Fellow with a background in environmental science and secondary science education, I am currently developing an educational curriculum to support the EnviroAtlas to

  10. Multidimensional display controller for displaying to a user an aspect of a multidimensional space visible from a base viewing location along a desired viewing orientation

    DOEpatents

    Davidson, George S.; Anderson, Thomas G.

    2001-01-01

    A display controller allows a user to control a base viewing location, a base viewing orientation, and a relative viewing orientation. The base viewing orientation and relative viewing orientation are combined to determine a desired viewing orientation. An aspect of a multidimensional space visible from the base viewing location along the desired viewing orientation is displayed to the user. The user can change the base viewing location, base viewing orientation, and relative viewing orientation by changing the location or other properties of input objects.
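
    The core operation the claim describes, combining a base viewing orientation with a relative viewing orientation into a desired viewing orientation, amounts to composing two rotations. A sketch using SciPy follows; the angles are arbitrary examples, not values from the patent.

```python
# Compose base and relative orientations into the desired view direction.
from scipy.spatial.transform import Rotation as R

base = R.from_euler("zyx", [30, 0, 0], degrees=True)      # base orientation
relative = R.from_euler("zyx", [0, 15, 0], degrees=True)  # e.g. a head turn
desired = base * relative                                  # composed view
view_dir = desired.apply([0.0, 0.0, -1.0])                 # look direction
```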

  11. Can Visualizing Document Space Improve Users' Information Foraging?

    ERIC Educational Resources Information Center

    Song, Min

    1998-01-01

    This study shows how users access relevant information in a visualized document space and determine whether BiblioMapper, a visualization tool, strengthens an information retrieval (IR) system and makes it more usable. BiblioMapper, developed for a CISI collection, was evaluated by accuracy, time, and user satisfaction. Users' navigation…

  12. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A Java version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  13. GRASP/Ada 95: Reverse Engineering Tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1996-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped an algorithmic-level graphical representation for Ada software, the Control Structure Diagram (CSD), and a new visualization for a fine-grained complexity metric called the Complexity Profile Graph (CPG). By synchronizing the CSD and the CPG, the CSD view of control structure, nesting, and source code is directly linked to the corresponding visualization of statement-level complexity in the CPG. GRASP has been integrated with GNAT, the GNU Ada 95 Translator, to provide a comprehensive graphical user interface and development environment for Ada 95. The user may view, edit, print, and compile source code as a CSD with no discernible addition to storage or computational overhead. The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada 95 source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. The current update has focused on the design and implementation of a new Motif-compliant user interface, and a new CSD generator consisting of a tagger and renderer. The Complexity Profile Graph (CPG) is based on a set of functions that describes the context, content, and scaling for complexity on a statement-by-statement basis. When combined graphically, the result is a composite profile of complexity for the program unit. Ongoing research includes the development and refinement of the associated functions, and the development of the CPG generator prototype. The current Version 5.0 prototype provides the capability for the user to generate CSDs and CPGs from Ada 95 source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. This report provides an overview of the GRASP/Ada project with an emphasis on the current update.

  14. Cotton QTLdb: a cotton QTL database for QTL analysis, visualization, and comparison between Gossypium hirsutum and G. hirsutum × G. barbadense populations.

    PubMed

    Said, Joseph I; Knapka, Joseph A; Song, Mingzhou; Zhang, Jinfa

    2015-08-01

    A specialized database currently containing more than 2200 QTL has been established, which allows graphic presentation, visualization and submission of QTL. In cotton, quantitative trait loci (QTL) studies focus on intraspecific Gossypium hirsutum and interspecific G. hirsutum × G. barbadense populations. These two populations are commercially important for the textile industry and are evaluated for fiber quality, yield, seed quality, resistance, physiological, and morphological trait QTL. Given the vast number of QTL studies in cotton, it is beneficial to organize the meta-analysis data into a functional database for the cotton community. Here we provide a tool for cotton researchers to visualize previously identified QTL and to submit their own QTL to the Cotton QTLdb database. The database gives the user the option of selecting various QTL trait types from either the G. hirsutum or G. hirsutum × G. barbadense populations. Based on the user's QTL trait selection, graphical representations of the chromosomes of the selected population are displayed as publication-ready images. The database also provides users with trait information on QTL, LOD scores, and explained phenotypic variances for all QTL selected. The Cotton QTLdb database provides cotton geneticists and breeders with statistical data on previously identified cotton QTL and a visualization tool to view QTL positions on chromosomes. Currently the database (Release 1) contains 2274 QTL, and succeeding QTL studies will be added regularly by the curators and by members of the cotton community who contribute their data to keep the database current. The database is accessible from http://www.cottonqtldb.org.

  15. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    PubMed

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. We combine unsupervised image analysis algorithms with an interactive visualization of the results. Our validation interface allows for each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low level image processing tasks.

  16. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard software design pattern Model-View-ViewModel (MVVM) was chosen as the major architecture of ACE for its neat coding structure, deep modularization, easy maintainability and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that simultaneously displays 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs; implementations of the related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.

  17. Scribl: an HTML5 Canvas-based graphics library for visualizing genomic data over the web

    PubMed Central

    Miller, Chase A.; Anthony, Jon; Meyer, Michelle M.; Marth, Gabor

    2013-01-01

    Motivation: High-throughput biological research requires simultaneous visualization as well as analysis of genomic data, e.g. read alignments, variant calls and genomic annotations. Traditionally, such integrative analysis required desktop applications operating on locally stored data. Many current terabyte-size datasets generated by large public consortia projects, however, are already only feasibly stored at specialist genome analysis centers. As even small laboratories can afford very large datasets, local storage and analysis are becoming increasingly limiting, and it is likely that most such datasets will soon be stored remotely, e.g. in the cloud. These developments will require web-based tools that enable users to access, analyze and view vast remotely stored data with a level of sophistication and interactivity that approximates desktop applications. As rapidly dropping cost enables researchers to collect data intended to answer questions in very specialized contexts, developers must also provide software libraries that empower users to implement customized data analyses and data views for their particular application. Such specialized, yet lightweight, applications would empower scientists to better answer specific biological questions than possible with general-purpose genome browsers currently available. Results: Using recent advances in core web technologies (HTML5), we developed Scribl, a flexible genomic visualization library specifically targeting coordinate-based data such as genomic features, DNA sequence and genetic variants. Scribl simplifies the development of sophisticated web-based graphical tools that approach the dynamism and interactivity of desktop applications. Availability and implementation: Software is freely available online at http://chmille4.github.com/Scribl/ and is implemented in JavaScript with all modern browsers supported. Contact: gabor.marth@bc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23172864

  18. Early warning of active fire hotspots through NASA FIRMS fire information system

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Murphy, K. J.

    2014-12-01

    Forest fires and wildfires can threaten ecosystems, wildlife, property, and often, large swaths of populations. Early warning of active fire hotspots plays a crucial role in planning, managing, and mitigating the damaging effects of wildfires. The NASA Fire Information for Resource Management System (FIRMS) has been providing active fire location information to users in easy-to-use formats for the better part of last decade, with a view to improving the alerting mechanisms and response times to fight forest and wildfires. FIRMS utilizes fires flagged as hotspots by the MODIS instrument flying aboard the Aqua and Terra satellites and sends early warning of detected hotspots via email in near real-time or as daily and weekly summaries. The email alerts can also be customized to send alerts for a particular region of interest, a country, or a specific protected area or park. In addition, a web mapping component, named "Web Fire Mapper" helps query and visualize hotspots. A newer version of Web Fire Mapper is being developed to enhance the existing visualization and alerting capabilities. Plans include supporting near real-time imagery from Aqua and Terra satellites to provide a more helpful context while viewing fires. Plans are also underway to upgrade the email alerts system to provide mobile-formatted messages and short text messages (SMS). The newer version of FIRMS will also allow users to obtain geo-located image snapshots, which can be imported into local GIS software by stakeholders to help further analyses. This talk will discuss the FIRMS system, its enhancements and its role in helping map, alert, and monitor fire hotspots by providing quick data visualization, querying, and download capabilities.

  19. Perceived Ownership of Avatars Influences Visual Perspective Taking

    PubMed Central

    Böffel, Christian; Müsseler, Jochen

    2018-01-01

    Modern computer-based applications often require the user to interact with avatars. Depending on the task at hand, spatial dissociation between the orientations of the user and the avatars might arise. As a consequence, the user has to adopt the avatar’s perspective and identify herself/himself with the avatar, possibly changing the user’s self-representation in the process. The present study aims to identify the conditions that benefit this change of perspective with objective performance measures and subjective self-estimations by integrating the idea of avatar-ownership into the cognitive phenomenon of spatial compatibility. Two different instructions were used to manipulate a user’s perceived ownership of an avatar in otherwise identical situations. Users with the high-ownership instruction reported higher levels of perceived ownership of the avatar and showed larger spatial compatibility effects from the avatar’s point of view in comparison to the low ownership instruction. This supports the hypothesis that perceived ownership benefits perspective taking. PMID:29887816

  20. "Transformation Tuesday": Temporal context and post valence influence the provision of social support on social media.

    PubMed

    Vogel, Erin A; Rose, Jason P; Crane, Chantal

    2018-01-01

    Social network sites (SNSs) such as Facebook have become integral in the development and maintenance of interpersonal relationships. Users of SNSs seek social support and validation, often using posts that illustrate how they have changed over time. The purpose of the present research is to examine how the valence and temporal context of an SNS post affect the likelihood of other users providing social support. Participants viewed hypothetical SNS posts and reported their intentions to provide social support to the users. Results revealed that participants were more likely to provide social support for posts that were positive and included temporal context (i.e., depicted improvement over time; Study 1). Furthermore, this research suggests that visual representations of change over time are needed to elicit social support (Study 2). Results are discussed in terms of their practical implications for SNS users and theoretical implications for the literature on social support and social media.

  1. Modeling and evaluating user behavior in exploratory visual analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.

    Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
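
    A minimal sketch of the Markov-chain idea follows (the state names are illustrative, not from the paper): transition probabilities between coded analysis states can be estimated directly from logged event sequences.

```python
# Estimate a first-order Markov transition model from a coded event log.
from collections import Counter, defaultdict

logs = ["browse", "filter", "inspect", "filter", "insight", "browse"]
counts = defaultdict(Counter)
for a, b in zip(logs, logs[1:]):
    counts[a][b] += 1

transitions = {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
               for s, nxt in counts.items()}
print(transitions["filter"])   # {'inspect': 0.5, 'insight': 0.5}
```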

  2. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    NASA Astrophysics Data System (ADS)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
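
    As background only (the paper's SCAAT integration is more involved than this), one common way to fuse low-cost gyroscope and accelerometer readings is a complementary filter; the blend factor below is an assumption, not a value from the paper.

```python
# Complementary filter: blend fast-but-drifting integrated gyro rate with
# noisy-but-drift-free accelerometer tilt (angles in radians).
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt from gravity vector
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```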

  3. A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process.

    PubMed

    Dabek, Filip; Caban, Jesus J

    2017-01-01

    Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions, with the intent of guiding a user through a task and thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed; and to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users, with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
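
    The paper's K-Reversible grammar induction is more involved than can be shown here; as a deliberately simplified stand-in, the bigram model below illustrates how observed interaction sequences can yield next-action suggestions for a new user (the action names are hypothetical).

```python
# Simplified bigram stand-in for suggestion generation from user sessions.
from collections import Counter, defaultdict

training_sessions = [
    ["zoom", "filter", "select", "export"],
    ["zoom", "filter", "annotate"],
]

model = defaultdict(Counter)
for session in training_sessions:
    for a, b in zip(session, session[1:]):
        model[a][b] += 1

def suggest(last_action, k=2):
    """Return up to k most common follow-up actions."""
    return [act for act, _ in model[last_action].most_common(k)]

print(suggest("filter"))   # ['select', 'annotate']
```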

  4. Gaia Data Release 1. The archive visualisation service

    NASA Astrophysics Data System (ADS)

    Moitinho, A.; Krone-Martins, A.; Savietto, H.; Barros, M.; Barata, C.; Falcão, A. J.; Fernandes, T.; Alves, J.; Silva, A. F.; Gomes, M.; Bakker, J.; Brown, A. G. A.; González-Núñez, J.; Gracia-Abril, G.; Gutiérrez-Sánchez, R.; Hernández, J.; Jordan, S.; Luri, X.; Merin, B.; Mignard, F.; Mora, A.; Navarro, V.; O'Mullane, W.; Sagristà Sellés, T.; Salgado, J.; Segovia, J. C.; Utrilla, E.; Arenou, F.; de Bruijne, J. H. J.; Jansen, F.; McCaughrean, M.; O'Flaherty, K. S.; Taylor, M. B.; Vallenari, A.

    2017-09-01

    Context. The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. These visual queries are ready for use in the Gaia Archive Search/data retrieval service. In addition, regions around user-selected objects can be further examined with automatically generated SIMBAD searches. Integration of the Aladin Lite and JS9 applications adds support for the visualisation of HiPS and FITS maps. The production of the all-sky source density map that became the iconic image of Gaia DR1 is described in detail. Conclusions: On the day of DR1, over seven thousand users accessed the Gaia Archive visualisation portal. The system, running on a single machine, proved robust and did not fail while enabling thousands of users to visualise and explore the over one billion sources in DR1. There are still several limitations, most notably that users may only choose from a list of pre-computed visualisations. Thus, other visualisation applications that can complement the archive service are examined. Finally, development plans for Data Release 2 are presented.
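
    The visual-query feature is easy to illustrate. Below is a minimal Python sketch of how a rectangle drawn on a scatter plot could be serialized into an ADQL string; the table and column names are our illustrative assumptions, not the service's actual output format.

        def region_to_adql(table, x_col, y_col, x_range, y_range, columns="*"):
            """Serialize a rectangle drawn on a 2D scatter plot into ADQL."""
            return (f"SELECT {columns} FROM {table} "
                    f"WHERE {x_col} BETWEEN {x_range[0]} AND {x_range[1]} "
                    f"AND {y_col} BETWEEN {y_range[0]} AND {y_range[1]}")

        # A sky-region selection around the Pleiades; names are illustrative.
        print(region_to_adql("gaiadr1.tgas_source", "ra", "dec",
                             (56.0, 58.0), (23.5, 24.5)))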

  5. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    NASA Astrophysics Data System (ADS)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    Virtual Reality Laboratory (VR Lab) is an innovation for conventional learning media that shows the whole learning process in a laboratory. Many tools and materials are needed by users for practical work in it, so users can experience a new learning atmosphere through this innovation. Technologies are more sophisticated nowadays than before, so carrying them into education can make it more effective and efficient. Supporting technologies such as a head-mounted display device and a hand motion gesture device are needed to build the VR Lab, and their integration forms the basis of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory, while the hand motion gesture device captures the user's real hands so that they can be visualized in the virtual reality laboratory. Virtual reality shows that using the newest technologies in the learning process can make it more interesting and easier to understand.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Richen; Guo, Hanqi; Yuan, Xiaoru

    Most existing approaches to visualizing vector field ensembles reveal the uncertainty of individual variables, for example, statistics or variability. However, user-defined derived features such as vortices or air masses are also quite significant, since they make more sense to domain scientists. In this paper, we present a new framework to extract user-defined derived features from different simulation runs. Specifically, we use a detail-to-overview searching scheme to help extract vortices with a user-defined shape. We further compute geometric information, including the size and geo-spatial location of the extracted vortices. We also design linked views to compare them between different runs. Finally, temporal information such as the occurrence time of the feature is estimated and compared. Results show that our method is capable of extracting the features across different runs and comparing them spatially and temporally.
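
    The abstract does not detail how vortex geometry is measured; as a minimal sketch of that kind of computation, assuming a uniform grid and a single 2D velocity slice, the Python code below locates the strongest vorticity peak and estimates a core size by thresholding (the function name and threshold fraction are ours):

        import numpy as np

        def strongest_vortex(u, v, dx=1.0, dy=1.0, frac=0.8):
            """Find the vorticity peak of a 2D velocity slice (u, v) and
            estimate the vortex core size by thresholding around the peak."""
            vort = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
            peak = np.unravel_index(np.argmax(np.abs(vort)), vort.shape)
            core = np.abs(vort) >= frac * np.abs(vort[peak])
            return peak, core.sum() * dx * dy   # grid location, core area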

  7. Klusters, NeuroScope, NDManager: a free software suite for neurophysiological data processing and visualization.

    PubMed

    Hazan, Lynn; Zugaro, Michaël; Buzsáki, György

    2006-09-15

    Recent technological advances now allow for simultaneous recording of large populations of anatomically distributed neurons in behaving animals. The free software package described here was designed to help neurophysiologists process and view recorded data in an efficient and user-friendly manner. This package consists of several well-integrated applications, including NeuroScope (http://neuroscope.sourceforge.net), an advanced viewer for electrophysiological and behavioral data with limited editing capabilities; Klusters (http://klusters.sourceforge.net), a graphical cluster cutting application for manual and semi-automatic spike sorting; and NDManager, an experimental parameter and data processing manager. All of these programs are distributed under the GNU General Public License (GPL, see http://www.gnu.org/licenses/gpl.html), which gives its users legal permission to copy, distribute and/or modify the software. Also included are extensive user manuals and sample data, as well as source code and documentation.

  8. Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.

    PubMed

    Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E

    2017-01-01

    This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.

  9. Integrated Cuing Requirements (ICR) Study: Demonstration Data Base and Users Guide.

    DTIC Science & Technology

    1983-07-01

    …viewed with a servo-mounted television camera and used to provide a visual scene for an observer in an ATD. [The remainder of this record excerpt is garbled scan residue: fragmentary cross-references to test apparatus and to figures on perceived distance, perceived size, and velocity of self-motion (Figures 33.4-1 and 33.5-1, Section 31._).]

  10. CyanoBase: the cyanobacteria genome database update 2010

    PubMed Central

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly. PMID:19880388

  11. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. These are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, for example, in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • Fast rendering of large amounts of data, so that a continuous view of the data is possible when changing the viewing angle and the data section, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to fit each other. The data selection may also depend on the visualization task: not only the amplitude data can be used, but also different seismic attribute transformations. The development is supplemented by interviews to analyse the efficiency and manageability of the stereoscopic workplace environment. Another point of investigation is immersion, i.e. the increased concentration on the observed scene when passing through the data, triggered by stereoscopic viewing. This effect is reinforced by a user interface which is so intuitive and simple that it does not draw attention away from the scene. For seismic interpretation, the stereoscopic view supports the pattern recognition of geological structures and the detection of their spatial heterogeneity. These are topics which are relevant for current geothermal exploration in Germany.

  12. Three-dimensional user interfaces for scientific visualization

    NASA Technical Reports Server (NTRS)

    Vandam, Andries

    1995-01-01

    The main goal of this project is to develop novel and productive user interface techniques for creating and managing visualizations of computational fluid dynamics (CFD) datasets. We have implemented an application framework in which we can build user interfaces for visualizing computational fluid dynamics datasets. This UI technology allows users to interactively place visualization probes in a dataset and modify some of their parameters. We have also implemented a time-critical scheduling system which strives to maintain a constant frame rate regardless of the number of visualization techniques. In the past year, we have published parts of this research at two conferences: the research annotation system at Visualization 1994, and the 3D user interface at UIST 1994. The real-time scheduling system has been submitted to the SIGGRAPH 1995 conference. Copies of these documents are included with this report.
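
    The time-critical scheduler is only summarized above; a minimal sketch of the idea, assuming each probe can predict its own render cost, is to scale every probe's quality so the predicted frame time fits a fixed budget. The names and the linear cost model below are illustrative, not the project's actual scheduler.

        def allocate_budget(probes, frame_budget_ms=33.0):
            """Scale each probe's quality so the predicted total render
            time fits the frame budget, keeping frame rate near-constant."""
            predicted = sum(p["cost_ms"] for p in probes)
            scale = min(1.0, frame_budget_ms / predicted) if predicted else 1.0
            for p in probes:
                p["quality"] = scale            # e.g. a streamline-count multiplier
            return probes

        probes = [{"name": "streamlines", "cost_ms": 20.0},
                  {"name": "isosurface",  "cost_ms": 25.0}]
        print(allocate_budget(probes))          # ~30 fps target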

  13. Future of Hydroinformatics: Towards Open, Integrated and Interactive Online Platforms

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    Hydroinformatics is a domain of science and technology dealing with the management of information in the field of hydrology (IWA, 2011). There is a need for innovative solutions to the challenges of open information, integration, and communication on the Internet. This presentation provides an overview of the trends and challenges in the future of hydroinformatics, and demonstrates an information system, the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for more than 1000 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage.

  14. Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems

    PubMed Central

    Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban

    2017-01-01

    An important way to limit malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to restrict access more tightly, but approaches to do so have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms, and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes simply to display the set of files accessible to a user on a moderately sized system. In our approach, we take these cubic algorithms and make them linear. We do this by reformulating the set-theoretic approach of the NGAC standard into a graph-theoretic approach and then applying standard graph algorithms. We can thus answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear-time graph algorithms. We also provide a default linear-time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but in reality is an automatically generated structure, abstracted from the underlying access control graph, that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that offers a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. This may help transition from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
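
    The core of the linear-time reformulation, answering reachability questions on an access-control graph, can be sketched briefly. The following Python code is a simplified illustration only: a breadth-first search from a user node over assignment edges, ignoring NGAC prohibitions and operation sets, with hypothetical node naming.

        from collections import deque

        def accessible_objects(graph, user):
            """BFS over an access graph: every object node reachable from
            the user along assignment edges is treated as accessible.
            Runs in O(nodes + edges), i.e. linear time."""
            seen, queue, files = {user}, deque([user]), []
            while queue:
                for nxt in graph.get(queue.popleft(), ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
                        if nxt.startswith("file:"):
                            files.append(nxt)
            return files

        graph = {"alice": ["group:staff"],
                 "group:staff": ["file:report.txt", "file:budget.xls"]}
        print(accessible_objects(graph, "alice"))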

  15. Future View: Web Navigation based on Learning User's Browsing Strategy

    NASA Astrophysics Data System (ADS)

    Nagino, Norikatsu; Yamada, Seiji

    In this paper, we propose a Future View system that assists a user's usual Web browsing. The Future View prefetches Web pages based on the user's browsing strategies and presents them to the user in order to assist Web browsing. To learn the user's browsing strategy, the Future View uses two types of learning classifier systems: a content-based classifier system for content change patterns and an action-based classifier system for the user's action patterns. The results of learning are applied to crawling by Web robots, and the gathered Web pages are presented to the user through a Web browser interface. We experimentally show the effectiveness of navigation using the Future View.

  16. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    Keywords: visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing. [Fragmentary excerpt, with report-form residue removed: … scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual … and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user …]

  17. Accessibility of Mobile Devices for Visually Impaired Users: An Evaluation of the Screen-Reader VoiceOver.

    PubMed

    Smaradottir, Berglind; Håland, Jarle; Martinez, Santiago

    2017-01-01

    A mobile device's touchscreen allows users to employ a choreography of hand gestures to interact with the user interface. A screen reader on a mobile device is designed to support the interaction of visually disabled users while using gestures. This paper presents an evaluation of VoiceOver, a screen reader in Apple Inc. products. The evaluation was a part of the research project "Visually impaired users touching the screen - a user evaluation of assistive technology".

  18. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

    The requirement for visualization of imaging interferometer data is pressing for users performing image interpretation and information extraction. However, conventional research on visualization focuses only on spectral image datasets in the spectral domain. Hence, quick display of interference spectral image datasets is one of the key steps in interference image processing. Conventional visualization of interference datasets applies classical spectral image display methods after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the image domain is addressed, and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle since its computation time is very large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, frees the display from the transformation. The authors choose three interference weighted envelopes based, respectively, on the Fourier transformation, the features of interference data, and the human visual system. A comparison of the proposed method with conventional methods shows a large difference in display time.
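
    The paper's three envelopes are not specified in the abstract; as a rough illustration of the transform-free idea, the Python sketch below forms a quick-look image as a weighted sum of interferogram magnitudes along the OPD axis, with a Gaussian weighting centred on the zero-path-difference sample as one plausible choice (all names and the weighting are our assumptions):

        import numpy as np

        def quicklook(cube, weights=None):
            """Quick-look image of an interferogram cube (rows x cols x OPD
            samples) without any Fourier transform: a weighted envelope of
            the interferogram magnitudes along the OPD axis."""
            n = cube.shape[-1]
            if weights is None:
                # emphasize samples near the zero-path-difference burst
                idx = np.arange(n)
                weights = np.exp(-0.5 * ((idx - n // 2) / (n / 8.0)) ** 2)
            return np.tensordot(np.abs(cube), weights, axes=([-1], [0]))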

  19. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  20. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame-rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  1. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, Cn; Yadava, G; Patel, V; Bednarek, Dr; Hoffmann, Kr; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame-rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  2. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    PubMed

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

    Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting/designing to create novel functionality or alter existing functionality and to improve stability and foldability. With a view to facilitating thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, is empowered with diverse query tools and search algorithms, with various rendering options to visualize the sequence- and structural-level information along with hydrogen-bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of the sequence and conformation of specific residues within the loops, have also been incorporated into the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.

  3. Sketching Designs Using the Five Design-Sheet Methodology.

    PubMed

    Roberts, Jonathan C; Headleand, Chris; Ritsos, Panagiotis D

    2016-01-01

    Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time and money and converge on better solutions more quickly. However, this design process is often viewed as too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, and then finally converge on a suitable solution). We present the Five Design-Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities and think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design-Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case study of its use in industry and experience of its use in teaching.

  4. Developing Visualization Techniques for Semantics-based Information Networks

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Hall, David R.

    2003-01-01

    Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.

  5. Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ

    PubMed Central

    Chikar, Jennifer A.; Batts, Shelley A.; Pfingst, Bryan E.; Raphael, Yehoash

    2009-01-01

    Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament-labeled nerve processes within the scala tympani, and the spatial relationship between them. PMID:19428528

  6. Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ.

    PubMed

    Chikar, Jennifer A; Batts, Shelley A; Pfingst, Bryan E; Raphael, Yehoash

    2009-05-15

    Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament-labeled nerve processes within the scala tympani, and the spatial relationship between them.

  7. APEX_SCOPE: A graphical user interface for visualization of multi-modal data in inter-disciplinary studies.

    PubMed

    Kanbar, Lara J; Shalish, Wissam; Precup, Doina; Brown, Karen; Sant'Anna, Guilherme M; Kearney, Robert E

    2017-07-01

    In multi-disciplinary studies, different forms of data are often collected for analysis. For example, APEX, a study on the automated prediction of extubation readiness in extremely preterm infants, collects clinical parameters and cardiorespiratory signals. A variety of cardiorespiratory metrics are computed from these signals and used to assign a cardiorespiratory pattern at each time point. In such a situation, exploratory analysis requires a visualization tool capable of displaying these different types of acquired and computed signals in an integrated environment. Thus, we developed APEX_SCOPE, a graphical tool for the visualization of multi-modal data comprising cardiorespiratory signals, automated cardiorespiratory metrics, automated respiratory patterns, manually classified respiratory patterns, and manual annotations by clinicians during data acquisition. This MATLAB-based application provides a means for collaborators to view combinations of signals to promote discussion, generate hypotheses and develop features.

  8. Software complex for geophysical data visualization

    NASA Astrophysics Data System (ADS)

    Kryukov, Ilya A.; Tyugin, Dmitry Y.; Kurkin, Andrey A.; Kurkina, Oxana E.

    2013-04-01

    The effectiveness of current research in geophysics is largely determined by the degree to which data processing and visualization procedures are implemented with modern information technology. Realistic and informative visualization of the results of three-dimensional modeling of geophysical processes contributes significantly to the naturalness of physical modeling and a detailed view of the phenomena. The main difficulty in this case is interpreting the results of the calculations: it is necessary to be able to observe the various parameters of the three-dimensional models, build sections on different planes to evaluate certain characteristics, and make rapid assessments. Programs for the interpretation and visualization of simulations are used all over the world, for example software systems such as ParaView, Golden Software Surfer, Voxler, Flow Vision and others. However, it is not always possible to solve a visualization problem with the help of a single software package. Preprocessing, data transfer between packages and setting up a uniform visualization style can turn into long and routine work. In addition, special display modes are sometimes required for specific data, and existing products tend to offer more generic features that are not always fully applicable to such special cases. Rendering of dynamic data may require scripting languages, which do not relieve the user from writing code. Therefore, the task was to develop a new and original software complex for the visualization of simulation results. Let us briefly list the primary features that were developed. The software complex is a graphical application with a convenient and simple user interface that displays the results of the simulation. The complex is also able to interactively manage the image, resize the image without loss of quality, apply two-dimensional and three-dimensional regular grids, set coordinate axes with data labels, and perform slices of the data. A particular feature of geophysical data is its size: detailed maps used in the simulations are large, so rendering in real time can be a difficult task even for powerful modern computers. Therefore, the performance of the software complex is an important aspect of this work. The complex is based on the latest version of the graphics API Microsoft DirectX 11, which reduces overhead and harnesses the power of modern hardware. Each geophysical calculation is an adjustment of the mathematical model for a particular case, so the architecture of the visualization complex is designed for scalability and the ability to customize visualization objects, for better visibility and comfort. In the present study, the software complex 'GeoVisual' was developed. One of the main features of this research is the use of bleeding-edge computer graphics techniques in scientific visualization. The research was supported by The Ministry of Education and Science of the Russian Federation, project 14.B37.21.0642.

  9. A parallel coordinates style interface for exploratory volume visualization.

    PubMed

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

  10. Disaster relief through composite signatures

    NASA Astrophysics Data System (ADS)

    Hawley, Chadwick T.; Hyde, Brian; Carpenter, Tom; Nichols, Steve

    2012-06-01

    A composite signature is a group of signatures that are related in such a way as to more completely or further define a target or operational endeavor at a higher fidelity. This paper builds on previous work developing innovative composite signatures associated with civil disasters, including physical, chemical and pattern/behavioral signatures. For the composite signature approach to be successful, it requires effective data fusion and visualization, which play a key role both in preparedness and in response and recovery, and are critical to saving lives. Visualization tools enhance the overall understanding of the crisis by pulling together and analyzing the data, and providing a clear and complete analysis of the information to the organizations/agencies dependent on it for a successful operation. An example of this, Freedom Web, is an easy-to-use data visualization and collaboration solution for use in homeland security, emergency preparedness, situational awareness, and event management. The solution provides a nationwide common operating picture for all levels of government through a web-based map interface. The tool was designed to be used by non-geospatial experts and is easily tailored to the specific needs of its users. Consisting of standard COTS and open-source databases and a web server, the solution lets users view, edit, share, and highlight information easily and quickly through a standard internet browser.

  11. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    NASA Technical Reports Server (NTRS)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  12. Distributed Energy Resources Customer Adoption Model - Graphical User Interface, Version 2.1.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewald, Friedrich; Stadler, Michael; Cardoso, Goncalo F

    The DER-CAM Graphical User Interface has been redesigned to consist of a dynamic tree structure on the left side of the application window to allow users to quickly navigate between different data categories and views. Views can either be tables with model parameters and input data, the optimization results, or a graphical interface to draw circuit topology and visualize investment results. The model parameters and input data consist of tables where values are assigned to specific keys. The aggregation of all model parameters and input data amounts to the data required to build a DER-CAM model, and is passed to the GAMS solver when users initiate the DER-CAM optimization process. Passing data to the GAMS solver relies on the use of a Java server that handles DER-CAM requests, queuing, and results delivery. This component of the DER-CAM GUI can be deployed either locally or remotely, and constitutes an intermediate step between the user data input and manipulation, and the execution of a DER-CAM optimization in the GAMS engine. The results view shows the results of the DER-CAM optimization and distinguishes between a single and a multi-objective process. The single optimization runs the DER-CAM optimization once and presents the results as a combination of summary charts and hourly dispatch profiles. The multi-objective optimization process consists of a sequence of runs initiated by the GUI, including: 1) CO2 minimization, 2) cost minimization, 3) a user-defined number of points in between objectives 1) and 2). The multi-objective results view includes both access to the detailed results of each point generated by the process as well as the generation of a Pareto Frontier graph to illustrate the trade-off between objectives. DER-CAM GUI 2.1.8 also introduces the ability to graphically generate circuit topologies, enabling support for DER-CAM 5.0.0. This feature consists of: 1) the drawing area, where users can manually create nodes and define their properties (e.g. point of common coupling, slack bus, load) and connect them through edges representing either power lines, transformers, or heat pipes, all with user-defined characteristics (e.g., length, ampacity, inductance, or heat loss); 2) the tables, which display the user-defined topology in the final numerical form that will be passed to the DER-CAM optimization. Finally, the DER-CAM GUI is also deployed with a database schema that allows users to provide different energy load profiles, solar irradiance profiles, and tariff data that can be stored locally and later used in any DER-CAM model. However, no real data will be delivered with this version.
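
    The multi-objective sequence (CO2 minimization, cost minimization, and user-defined points in between) can be mimicked with a weighted-sum sweep. The Python sketch below is a toy stand-in: solve(w) represents one DER-CAM run under the blended objective, and the convex trade-off inside toy_solve is invented for illustration.

        def pareto_sweep(solve, n_points=5):
            """Approximate the cost/CO2 Pareto frontier by sweeping the
            weight w of the blended objective w*cost + (1-w)*co2 from the
            pure-CO2 optimum (w=0) to the pure-cost optimum (w=1)."""
            return [solve(i / (n_points - 1)) for i in range(n_points)]

        def toy_solve(w):
            # invented convex trade-off standing in for one DER-CAM run
            return round(100 - 40 * w, 1), round(50 + 30 * w, 1)  # (cost, co2)

        print(pareto_sweep(toy_solve))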

  13. Stereoscopy in cinematographic synthetic imagery

    NASA Astrophysics Data System (ADS)

    Eisenmann, Jonathan; Parent, Rick

    2009-02-01

    In this paper we present experiments and results pertaining to the perception of depth in stereoscopic viewing of synthetic imagery. In computer animation, typical synthetic imagery is highly textured and uses stylized illumination of abstracted material models by abstracted light source models. While there have been numerous studies concerning stereoscopic capabilities, conventions for staging and cinematography in stereoscopic movies have not yet been well-established. Our long-term goal is to measure the effectiveness of various cinematography techniques on the human visual system in a theatrical viewing environment. We would like to identify the elements of stereoscopic cinema that are important in terms of enhancing the viewer's understanding of a scene as well as providing guidelines for the cinematographer relating to storytelling. In these experiments we isolated stereoscopic effects by eliminating as many other visual cues as is reasonable. In particular, we aim to empirically determine what types of movement in synthetic imagery affect the perceptual depth sensing capabilities of our viewers. Using synthetic imagery, we created several viewing scenarios in which the viewer is asked to locate a target object's depth in a simple environment. The scenarios were specifically designed to compare the effectiveness of stereo viewing, camera movement, and object motion in aiding depth perception. Data were collected showing the error between the choice of the user and the actual depth value, and patterns were identified that relate the test variables to the viewer's perceptual depth accuracy in our theatrical viewing environment.

  14. Beyond Control Panels: Direct Manipulation for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Bradel, Lauren; North, Chris

    2013-07-19

    Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data, creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged, focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article, we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels, to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation, where users can not only gain insight but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.

  15. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration

    PubMed Central

    Thorvaldsdóttir, Helga; Mesirov, Jill P.

    2013-01-01

    Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today’s sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license. PMID:22517427

  16. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.

    PubMed

    Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P

    2013-03-01

    Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.

  17. Level-2 Milestone 4797: Early Users on Max, Sequoia Visualization Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kim C.

    This report documents that an early user has run successfully on Max, the Sequoia visualization cluster, fulfilling ASC L2 milestone 4797: Early Users on Sequoia Visualization System (Max), due December 31, 2013. The Max visualization and data analysis cluster will provide Sequoia users with compute cycles and an interactive option for data exploration and analysis. The system will be integrated in the first quarter of FY14 and is expected to be moved to the classified network by the second quarter of FY14. The goal of this milestone is to have early users running their visualization and data analysis work on the Max cluster on the classified network.

  18. Social Image Captioning: Exploring Visual Attention and User Attention.

    PubMed

    Wang, Leiquan; Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei

    2018-02-22

    Image captioning with natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has rarely been investigated for a similar task. The user-contributed tags, which can reflect user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed dual attention method.
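
    The abstract does not give the fusion equations of the dual attention model; one plausible formulation, sketched below in Python, forms separate softmax-weighted context vectors over image-region features (visual attention) and tag embeddings (user attention) and blends them for the caption decoder. All names, shapes, and the blending weight are illustrative assumptions.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def dual_attention(region_feats, region_scores, tag_embs, tag_scores,
                           beta=0.5):
            """Blend visual attention over image regions with user attention
            over tag embeddings into one context vector for the decoder.
            The score vectors would come from learned attention layers."""
            vis_ctx = softmax(region_scores) @ region_feats   # (d,)
            usr_ctx = softmax(tag_scores) @ tag_embs          # (d,)
            return beta * vis_ctx + (1.0 - beta) * usr_ctx

        regions, tags = np.random.rand(10, 64), np.random.rand(4, 64)
        ctx = dual_attention(regions, np.random.rand(10), tags, np.random.rand(4))
        print(ctx.shape)  # (64,)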

  19. Social Image Captioning: Exploring Visual Attention and User Attention

    PubMed Central

    Chu, Xiaoliang; Zhang, Weishan; Wei, Yiwei; Sun, Weichen; Wu, Chunlei

    2018-01-01

    Image captioning with natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has rarely been investigated for a similar task. The user-contributed tags, which can reflect user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining visual attention and user attention simultaneously. Visual attention is used to compress a large amount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed dual attention method. PMID:29470409

  20. A main path domain map as digital library interface

    NASA Astrophysics Data System (ADS)

    Demaine, Jeffrey

    2009-01-01

    The shift to electronic publishing of scientific journals is an opportunity for the digital library to provide non-traditional ways of accessing the literature. One method is to use citation metadata drawn from a collection of electronic journals to generate maps of science. These maps visualize the communication patterns in the collection, giving the user an easy-to-grasp view of the semantic structure underlying the scientific literature. For this visualization to be understandable, the complexity of the citation network must be reduced through an algorithm. This paper describes the Citation Pathfinder application and its integration into a prototype digital library. This application generates small-scale citation networks that expand upon the search results of the digital library. These domain maps are linked to the collection, creating an interface that is based on the communication patterns in science. The Main Path Analysis technique is employed to simplify these networks into linear, sequential structures. By identifying patterns that characterize the evolution of the research field, Citation Pathfinder uses citations to give users a deeper understanding of the scientific literature.
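
    Main Path Analysis itself is standard enough to sketch: weight each citation edge by its Search Path Count (the number of source-to-sink paths traversing it) and greedily follow the heaviest edges. The Python below is a minimal sketch on a toy citation DAG; the graph and names are invented for illustration.

        from functools import lru_cache

        # toy citation DAG: edge A -> B means paper A is cited by paper B
        dag = {"p1": ["p2", "p3"], "p2": ["p4"], "p3": ["p4"], "p4": []}

        @lru_cache(maxsize=None)
        def paths_to_sink(node):
            return 1 if not dag[node] else sum(paths_to_sink(n) for n in dag[node])

        @lru_cache(maxsize=None)
        def paths_from_source(node):
            ins = [u for u, vs in dag.items() if node in vs]
            return 1 if not ins else sum(paths_from_source(u) for u in ins)

        def spc(u, v):
            """Search Path Count of edge (u, v): source-to-sink paths through it."""
            return paths_from_source(u) * paths_to_sink(v)

        def main_path(start):
            """Greedily follow the highest-SPC edge from `start` to a sink."""
            path = [start]
            while dag[path[-1]]:
                path.append(max(dag[path[-1]], key=lambda v: spc(path[-1], v)))
            return path

        print(main_path("p1"))  # e.g. ['p1', 'p2', 'p4']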

  1. Head-mounted spatial instruments: Synthetic reality or impossible dream

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Grunwald, Arthur; Velger, Mordekhai

    1988-01-01

    A spatial instrument is defined as a display device which has been either geometrically or symbolically enhanced to better enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. It is also found that deliberate, appropriate geometric distortion of the perspective projection of an image can improve user performance. These two findings raise intriguing questions concerning the design of head-mounted spatial instruments. The design of such instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but also may significantly benefit from the introduction of artificial distortions which enhance performance. These image manipulations, however, can cause a loss of visual-vestibular coordination and induce motion sickness. Additionally, adaptation to these manipulations is apt to be impaired by computational delays in the image display. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.

  2. New Tools for Viewing Spectrally and Temporally-Rich Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Bradley, E. S.; Toomey, M. P.; Roberts, D. A.; Still, C. J.

    2010-12-01

    High frequency, temporally extensive remote sensing datasets (GOES: 30 minutes; Santa Cruz Island webcam: nearly 5 years at every 10 min.) and airborne imaging spectrometry (AVIRIS, with 224 spectral bands) present exciting opportunities for education, synthesis, and analysis. However, the large file volumes and sizes can make holistic review and exploration difficult. In this research, we explore two options for visualization: (1) a web-based portal for time-series analysis, PanOpt, and (2) Google Earth-based timestamped image overlays. PanOpt is an interactive website (http://zulu.geog.ucsb.edu/panopt/) which integrates high frequency (GOES) and multispectral (MODIS) satellite imagery with webcam ground-based repeat photography. Side-by-side comparison of satellite imagery with webcam images supports analysis of atmospheric and environmental phenomena. In this proof of concept, we have integrated four years of imagery for a multi-view FogCam on Santa Cruz Island off the coast of Southern California with two years of GOES-11 and four years of MODIS Aqua imagery subsets for the area (14,000 km2). From the PHP-based website, users can search the data (date, time of day, etc.), specify timestep and display size, and then view the image stack as animations or in matrix form. Extracted metrics for regions of interest (ROIs) can be viewed in different formats, including time-series and scatter plots. Through click and mouseover actions on the hyperlink-enabled data points, users can view the corresponding images. This directly melds the quantitative and qualitative aspects and could be particularly effective for both education and anomaly interpretation. We have also extended this project to Google Earth with timestamped GOES and MODIS image overlays, which can be controlled using the temporal slider and linked to a screen chart of ancillary meteorological data. The automated ENVI/IDL script for generating KMZ overlays was also applied to generate same-day visualization of AVIRIS acquisitions as part of the Gulf of Mexico oil spill response. This supports location-focused imagery review and synthesis, which is critical for successfully imaging moving targets, such as oil slicks.

  3. Development of MPEG standards for 3D and free viewpoint video

    NASA Astrophysics Data System (ADS)

    Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony

    2005-11-01

    An overview of 3D and free viewpoint video is given in this paper, with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to freely navigate within real world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conformant realization using MPEG-4. Then the principles of 3D video, which provides the user with a 3D depth impression of the observed scene, are introduced. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is a clear demand from industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon, and will create huge new markets.

  4. Integrating Satellite, Radar and Surface Observation with Time and Space Matching

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2015-12-01

    The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching to the satellite, radar and surface observation datasets automatically synchronizes the display from different data sources and spatially subsets them to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of a weather system, to better understand the underlying dynamics and physics of weather phenomena.
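
    The time-matching step can be illustrated with a generic nearest-time match: for each display time step, take the closest observation within a tolerance window. The tolerance and timestamps below are invented; the IDV's actual matching logic is richer:

    ```python
    import bisect
    from datetime import datetime, timedelta

    def match_times(display_times, obs_times, tolerance=timedelta(minutes=15)):
        # For each display time step, find the nearest observation within
        # the tolerance window (None when nothing qualifies).
        # obs_times must be sorted ascending.
        matched = []
        for t in display_times:
            i = bisect.bisect_left(obs_times, t)
            candidates = obs_times[max(0, i - 1):i + 1]
            best = min(candidates, key=lambda o: abs(o - t), default=None)
            if best is not None and abs(best - t) > tolerance:
                best = None
            matched.append(best)
        return matched

    radar = [datetime(2015, 7, 1, 12, m) for m in (0, 6, 12, 18)]
    frames = [datetime(2015, 7, 1, 12, m) for m in (0, 10, 40)]
    print(match_times(frames, radar))  # last frame has no radar scan nearby
    ```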

  5. Geospatial Visualization of Scientific Data Through Keyhole Markup Language

    NASA Astrophysics Data System (ADS)

    Wernecke, J.; Bailey, J. E.

    2008-12-01

    The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the parallel maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. Originally created by Keyhole Inc., which was acquired by Google in 2004, KML was given over to the Open Geospatial Consortium (OGC) in 2007. It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and the development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, have been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any user can create, edit, and share their own KML, with no or relatively little knowledge of computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
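
    Because KML is plain XML, a minimal overlay can be generated with a few lines of code. A sketch that writes a one-placemark KML file; the event name and coordinates are invented:

    ```python
    def placemark(name, lon, lat, description=""):
        # Minimal KML document with one Placemark; coordinates are
        # longitude,latitude per the KML specification.
        return f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>{name}</name>
        <description>{description}</description>
        <Point><coordinates>{lon},{lat}</coordinates></Point>
      </Placemark>
    </kml>"""

    with open("quake.kml", "w") as f:
        f.write(placemark("M5.1 earthquake", -122.84, 38.82, "example event"))
    ```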

  6. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  7. Immersive Virtual Reality for Visualization of Abdominal CT.

    PubMed

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A; Bodenheimer, Robert E

    2013-03-28

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  8. Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants

    ERIC Educational Resources Information Center

    Dorman, Michael F.; Liss, Julie; Wang, Shuai; Berisha, Visar; Ludwig, Cimarron; Natale, Sarah Cook

    2016-01-01

    Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results: (a) Most CI users report that most of the time, they have access to both A and V…

  9. Executive function deficits in short-term abstinent cannabis users.

    PubMed

    McHale, Sue; Hunt, Nigel

    2008-07-01

    Few cognitive tasks are adequately sensitive to show the small decrements in performance in abstinent chronic cannabis users. In this series of three experiments we set out to demonstrate a variety of tasks that are sufficiently sensitive to show differences in visual memory, verbal memory, everyday memory and executive function between controls and cannabis users. The three studies explored cognitive deficits (phonemic verbal fluency, visual recognition, immediate and delayed recall, and prospective memory) in short-term abstinent cannabis users. Participants were selected using snowball sampling, with cannabis users being compared to a standard control group and a tobacco-use control group. The cannabis users, compared to both control groups, had deficits on verbal fluency, visual recognition, delayed visual recall, and short- and long-interval prospective memory. There were no differences for immediate visual recall. These findings suggest that cannabis use leads to impaired executive function. Further research needs to explore the longer-term impact of cannabis use. Copyright 2008 John Wiley & Sons, Ltd.

  10. An electrooculogram-based binary saccade sequence classification (BSSC) technique for augmentative communication and control.

    PubMed

    Keegan, Johnalan; Burke, Edward; Condron, James

    2009-01-01

    In the field of assistive technology, the electrooculogram (EOG) can be used as a channel of communication and the basis of a man-machine interface. For many people with severe motor disabilities, simple actions such as changing the TV channel require assistance. This paper describes a method of detecting saccadic eye movements and the use of a saccade sequence classification algorithm to facilitate communication and control. Saccades are fast eye movements that occur when a person's gaze jumps from one fixation point to another. The classification is based on pre-defined sequences of saccades, guided by a static visual template (e.g. a page or poster). The template, consisting of a table of symbols each having a clearly identifiable fixation point, is situated within view of the user. To execute a particular command, the user moves his or her gaze through a pre-defined path of eye movements. This results in a well-formed sequence of saccades, which is translated into a command if a match is found in a library of predefined sequences. A coordinate transformation algorithm is applied to each candidate sequence of recorded saccades to mitigate the effect of changes in the user's position and orientation relative to the visual template. Upon recognition of a saccade sequence from the library, its associated command is executed. A preliminary experiment in which two subjects were instructed to perform a series of command sequences consisting of 8 different commands is presented in the final sections. The system is also shown to be extensible to facilitate convenient text entry via an alphabetic visual template.
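
    A toy sketch of the detect-and-classify idea: threshold the EOG velocity to find saccades, label each by direction, and look the resulting string up in a command library. A real system works in two dimensions and applies the coordinate transformation described above; the sampling rate, threshold, and command table here are invented:

    ```python
    import numpy as np

    def detect_saccades(eog, fs, thresh_deg_s):
        # Threshold the EOG derivative; collapse each run of fast samples
        # into one saccade labeled by direction (horizontal channel only).
        vel = np.diff(eog) * fs
        fast = np.abs(vel) > thresh_deg_s
        labels, in_saccade = [], False
        for i, f in enumerate(fast):
            if f and not in_saccade:
                labels.append("R" if vel[i] > 0 else "L")
                in_saccade = True
            elif not f:
                in_saccade = False
        return "".join(labels)

    COMMANDS = {"RRL": "channel up", "LLR": "channel down", "RLRL": "power"}

    def classify(eog, fs=100, thresh_deg_s=200.0):
        return COMMANDS.get(detect_saccades(eog, fs, thresh_deg_s), "no match")

    signal = np.repeat([0.0, 5.0, 10.0, 5.0], 50)  # right, right, left jumps
    print(classify(signal))                         # -> "channel up"
    ```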

  11. Tutorial on Protein Ontology Resources

    PubMed Central

    Arighi, Cecilia; Drabkin, Harold; Christie, Karen R.; Ross, Karen; Natale, Darren

    2017-01-01

    The Protein Ontology (PRO) is the reference ontology for proteins in the Open Biomedical Ontologies (OBO) Foundry and consists of three sub-ontologies representing protein classes of homologous genes, proteoforms (e.g., splice isoforms, sequence variants, and post-translationally modified forms), and protein complexes. PRO defines classes of proteins and protein complexes, both species-specific and species non-specific, and indicates their relationships in a hierarchical framework, supporting accurate protein annotation at the appropriate level of granularity, analyses of protein conservation across species, and semantic reasoning. In the first section of this chapter, we describe the PRO framework, including the categories of PRO terms and the relationship of PRO to other ontologies and protein resources. Next, we provide a tutorial on the PRO website (proconsortium.org), where users can browse and search the PRO hierarchy, view reports on individual PRO terms, and visualize relationships among PRO terms in a hierarchical table view, a multiple sequence alignment view, and a Cytoscape network view. Finally, we describe several examples illustrating the unique and rich information available in PRO. PMID:28150233

  12. A Visual Editor in Java for View

    NASA Technical Reports Server (NTRS)

    Stansifer, Ryan

    2000-01-01

    In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously, a special-purpose program had been written to display data on emulations of the character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on the newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.

  13. A results-based process for evaluation of diverse visual analytics tools

    NASA Astrophysics Data System (ADS)

    Rubin, Gary; Berger, David H.

    2013-05-01

    With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system but, when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.

  14. VPV--The velocity profile viewer user manual

    USGS Publications Warehouse

    Donovan, John M.

    2004-01-01

    The Velocity Profile Viewer (VPV) is a tool for visualizing time series of velocity profiles, developed by the U.S. Geological Survey (USGS). The USGS uses VPV to preview and present measured velocity data from acoustic Doppler current profilers and simulated velocity data from three-dimensional estuarine, river, and lake hydrodynamic models. The data can be viewed as an animated three-dimensional profile or as a stack of time-series graphs that each represents a location in the water column. The graphically displayed data are shown at each time step, like frames of animation. The animation can play at several different speeds or can be suspended on one frame. The viewing angle and time can be manipulated using the mouse. A number of options control the appearance of the profile and the graphs. VPV cannot edit or save data, but it can create a PostScript file showing the velocity profile in three dimensions. This user manual describes how to use each of these features. VPV is available and can be downloaded for free from the World Wide Web at http://ca.water.usgs.gov/program/sfbay/vpv.

  15. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    PubMed

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
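
    At its simplest, the adaptive-focus idea reduces to thin-lens vergence arithmetic: the tunable lens supplies the difference between the display panel's fixed vergence and the vergence of the fixated depth, plus the user's spherical correction. A toy calculation under those assumptions (the 4 D panel vergence and sign conventions are illustrative, not the authors' optical model):

    ```python
    def lens_power(fixation_depth_m, user_correction_d=0.0, panel_vergence_d=4.0):
        # Thin-lens vergence arithmetic: light from the panel arrives with
        # vergence -panel_vergence_d (panel assumed 0.25 m away); to place
        # the virtual image at the fixated depth, the outgoing vergence
        # must be -1/depth, so the lens supplies the difference. The user's
        # spherical correction (negative for myopes) is simply added.
        return panel_vergence_d - 1.0 / fixation_depth_m + user_correction_d

    print(lens_power(2.0))        # emmetrope looking 2 m away    -> 3.5 D
    print(lens_power(0.5, -2.0))  # -2 D myope looking 0.5 m away -> 0.0 D
    ```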

  16. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    NASA Astrophysics Data System (ADS)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  17. Power Mobility with Collision Avoidance for Older Adults: User, Caregiver and Prescriber Perspectives

    PubMed Central

    Wang, Rosalie H; Korotchenko, Alexandra; Clarke, Laura Hurd; Ben Mortenson, W; Mihailidis, Alex

    2017-01-01

    Collision avoidance technology has the capacity to facilitate safer mobility among older power mobility users with physical, sensory and cognitive impairments, thus enabling independence for more potential users. However, little is known about consumers’ perceptions of collision avoidance. This article draws on interviews with 29 users, five caregivers, and 10 prescribers to examine views on the design and utilization of this technology. Data analysis identified three themes: “useful situations or contexts”, “technology design issues and real life application”, and “appropriateness of collision avoidance technology for a variety of users”. Findings support the ongoing development of collision avoidance for older adult users. The majority of participants were supportive of the technology, and felt that it might benefit current power mobility users and users with visual impairments, but might be unsuitable for people with significant cognitive impairments. Some participants voiced concerns regarding the risk for injury with power mobility use and some identified situations where collision avoidance might be beneficial (driving backwards, avoiding dynamic obstacles, negotiating outdoor barriers, and learning power mobility use). Design issues include the need for context awareness, reliability, and user interface specifications. Furthermore, user desire to maintain driving autonomy indicates the need to develop collaboratively-controlled systems. This research lays the groundwork for future development by identifying and illustrating consumer needs for this technology. PMID:24458968

  18. Cell Phones, Tablets, and Other Mobile Technology for Users with Visual Impairments

    MedlinePlus


  19. Systems Engineering Model and Training Application for Desktop Environment

    NASA Technical Reports Server (NTRS)

    May, Jeffrey T.

    2010-01-01

    This project provides a graphical-user-interface-based simulator for desktop training, operations and procedure development, and system reference. The simulator allows engineers to train and further understand the dynamics of their system from their local desktops. Users can train on and evaluate their system at a pace and skill level suited to their competency, and from a perspective suited to their needs. The simulator will not require any special resources to execute and should generally be available for use. The interface is based on the concept of presenting the model of the system in ways that best suit the user's application or training needs. The three levels of views are the Component View, the System View (overall system), and the Console View (monitor). These views are portals into a single model, so a change made to the model from one view, or from a model manager Graphical User Interface, is reflected in all other views.
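
    The "several views of one model" arrangement is essentially the observer pattern; a minimal sketch with invented class and state names:

    ```python
    class Model:
        # Single source of truth; every attached view re-renders on change.
        def __init__(self):
            self._views = []
            self._state = {}

        def attach(self, view):
            self._views.append(view)

        def set(self, key, value):
            self._state[key] = value
            for view in self._views:
                view.refresh(self._state)

    class ComponentView:
        def refresh(self, state):
            print("component view:", state)

    class ConsoleView:
        def refresh(self, state):
            print("console view:  ", state)

    model = Model()
    model.attach(ComponentView())
    model.attach(ConsoleView())
    model.set("valve_A", "open")  # both views update from the one model
    ```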

  20. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are bound to be accessible only from a few repositories, and users will have to deal with data sets that are effectively immobile and difficult to download in practice. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
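
    JPEG 2000 exposes dyadic resolution levels, which is what makes serving "remote images at different resolution levels as a single data stream" practical: the client requests only the level the current viewport needs. A toy level-selection sketch (the real viewer also handles region-of-interest windows):

    ```python
    def jpeg2000_level(full_width_px, viewport_px):
        # Each JPEG 2000 resolution level halves the image width and
        # height; pick the deepest level that still fills the viewport.
        level = 0
        while full_width_px // (2 ** (level + 1)) >= viewport_px:
            level += 1
        return level

    print(jpeg2000_level(4096, 512))  # -> 3, i.e. request a 512-px image
    ```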

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Steven Karl; Day, Christy M.; Determan, John C.

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation (DSS) comprising coupled nonlinear differential equations that describe the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to an implementation in Visual Studio to provide a design team the ability to rapidly estimate system performance impacts from a variety of design decisions. This provides a method to assist in optimization of the system design. Once the design has been generated in some detail, the C++ version of the system model may then be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation and operator recognition and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.
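
    For a flavor of the neutron-kinetics piece of such a simulation, a one-delayed-group point-kinetics sketch is shown below; the constants are generic textbook values, and the radiolytic-gas and thermal-hydraulic couplings of the actual DSS are omitted:

    ```python
    from scipy.integrate import solve_ivp

    # One-delayed-group point kinetics -- a toy stand-in for the
    # neutron-kinetics portion of the coupled system equations.
    BETA, GEN_TIME, DECAY = 0.0065, 1e-4, 0.08  # illustrative constants

    def kinetics(t, y, rho):
        n, c = y  # neutron population and delayed-neutron precursors
        dn = (rho - BETA) / GEN_TIME * n + DECAY * c
        dc = BETA / GEN_TIME * n - DECAY * c
        return [dn, dc]

    y0 = [1.0, BETA / (GEN_TIME * DECAY)]  # start at equilibrium
    sol = solve_ivp(kinetics, (0.0, 5.0), y0, args=(0.001,), max_step=0.01)
    print(f"relative power after 5 s at +0.1% reactivity: {sol.y[0, -1]:.2f}")
    ```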

  2. 3DProIN: Protein-Protein Interaction Networks and Structure Visualization.

    PubMed

    Li, Hui; Liu, Chunmei

    2014-06-14

    3DProIN is a computational tool to visualize protein-protein interaction networks in both two-dimensional (2D) and three-dimensional (3D) views. It models protein-protein interactions as a graph and explores the biologically relevant features of the tertiary structures of each protein in the network. Properties such as the color, shape and name of each node (protein) of the network can be edited in either the 2D or 3D view. 3DProIN is implemented using 3D Java and the C programming language. A web-crawling technique is also used to parse protein interactions dynamically retrieved from the Protein Data Bank (PDB). It is a Java applet component that is embedded in the web page, and it can be used on different platforms, including Linux, Mac and Windows, using web browsers such as Firefox, Internet Explorer, Chrome and Safari. It was also converted into a Mac app and submitted to the App Store as a free app. Mac users can also download the app from our website. 3DProIN is available for academic research at http://bicompute.appspot.com.

  3. The Lunar Mapping and Modeling Portal: Capabilities and Lunar Data Products to support Return to the Moon

    NASA Astrophysics Data System (ADS)

    Law, E.; Bui, B.; Chang, G.; Goodale, C. E.; Kim, R.; Malhotra, S.; Ramirez, P.; Rodriguez, L.; Sadaqathulla, S.; Nall, M.; Muery, K.

    2012-12-01

    The Lunar Mapping and Modeling Portal (LMMP), is a multi-center project led by NASA's Marshall Space Flight Center. The LMMP is a web-based Portal and a suite of interactive visualization and analysis tools to enable lunar scientists, engineers, and mission planners to access mapped lunar data products from past and current lunar missions, e.g., Lunar Reconnaissance Orbiter, Apollo, Lunar Orbiter, Lunar Prospector, and Clementine. The Portal allows users to search, view and download a vast number of the most recent lunar digital products including image mosaics, digital elevation models, and in situ lunar resource maps such as iron and hydrogen abundance. The Portal also provides a number of visualization and analysis tools that perform lighting analysis and local hazard assessments, such as, slope, surface roughness and crater/boulder distribution. In this talk, we will give a brief overview of the project. After that, we will highlight various key features and Lunar data products. We will further demonstrate image viewing and layering of lunar map images via our web portal as well as mobile devices.

  4. Ergonomic approaches to designing educational materials for immersive multi-projection system

    NASA Astrophysics Data System (ADS)

    Shibata, Takashi; Lee, JaeLin; Inoue, Tetsuri

    2014-02-01

    Rapid advances in computer and display technologies have made it possible to present high-quality virtual reality (VR) environments. To use such virtual environments effectively, research should be performed into how users perceive and react to virtual environments in view of particular human factors. We created a VR simulation of sea fish for science education, and we conducted an experiment to examine how observers perceive the size and depth of an object within their reach, and evaluated their visual fatigue. We chose a multi-projection system for presenting the educational VR simulation because this system can provide actual-size objects and produce stereo images located close to the observer. The results of the experiment show that estimation of size and depth was relatively accurate when subjects used physical actions to assess them. Presenting images within the observer's reach is suggested to be useful for education in VR environments. The evaluation of visual fatigue shows that the level of symptoms from short-duration viewing of stereo images with large disparity in the VR environment was low.

  5. Google Sky: A Digital View of the Night Sky

    NASA Astrophysics Data System (ADS)

    Connolly, A.; Scranton, R.; Ornduff, T.

    2008-11-01

    From its inception astronomy has been a visual science: from careful observations of the sky using the naked eye, to the use of telescopes and photographs to map the distribution of stars and galaxies, to the current era of digital cameras that can image the sky over many decades of the electromagnetic spectrum. Sky in Google Earth (http://earth.google.com) and Google Sky (http://www.google.com/sky) continue this tradition, providing an intuitive visual interface to some of the largest astronomical imaging surveys of the sky. By streaming multi-color imagery, catalogs and time-domain data, and by annotating interesting astronomical sources and events with placemarks, podcasts and videos, Sky provides a panchromatic view of the universe accessible to anyone with a computer. Beyond a simple exploration of the sky, Google Sky enables users to create and share content with others around the world. With an open interface available on Linux, Mac OS X and Windows, and translations of the content into over 20 different languages, we present Sky as the embodiment of a virtual telescope for discovery and for sharing the excitement of astronomy and science as a whole.

  6. LC-IM-TOF Instrument Control & Data Visualization Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-05-12

    Liquid Chromatography-Ion Mobility-Time of Flight Instrument Control and Data Visualization software is designed to control instrument voltages for the ion mobility drift tube. It collects and stores information from the Agilent TOF instrument and analyzes/displays the acquired ion intensity information. The software interface can be split into three categories -- Instrument Settings/Controls, Data Acquisition, and Viewer. Instrument Settings/Controls prepares the instrument for Data Acquisition. The Viewer contains common objects that are used by Instrument Settings/Controls and Data Acquisition. Intensity information is collected in 1 ns bins, separated by TOF pulses called scans. A collection of scans is stored side by side, making up an accumulation. In order for the computer to keep up with the stream of data, 30-50 accumulations are commonly summed into a single frame. A collection of frames makes up an experiment. The Viewer software then takes the experiment and presents the data in several possible ways; each frame can be viewed in TOF bins or m/z (mass-to-charge ratio). The experiment can be viewed frame by frame, by merging several frames, or by viewing the peak chromatogram. The user can zoom into the data, export data, and/or animate frames. Additional features include calibration of the data and post-processing of multiplexed data.
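
    The scan/accumulation/frame hierarchy maps naturally onto array axes. A toy sketch with invented dimensions, summing accumulations into a frame and collapsing the frame into a per-scan chromatogram and a summed spectrum:

    ```python
    import numpy as np

    # Toy dimensions: 1-ns TOF bins per scan; scans stack into an
    # accumulation; 40 accumulations are summed into one frame.
    rng = np.random.default_rng(1)
    accs_per_frame, scans_per_acc, tof_bins = 40, 64, 1000
    raw = rng.poisson(0.1, size=(accs_per_frame, scans_per_acc, tof_bins))

    frame = raw.sum(axis=0)            # sum accumulations -> one frame
    chromatogram = frame.sum(axis=1)   # total intensity per scan
    spectrum = frame.sum(axis=0)       # summed TOF spectrum for the frame
    print(frame.shape, chromatogram.shape, spectrum.shape)
    ```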

  7. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    PubMed

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding, which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Accessing eSDO Solar Image Processing and Visualization through AstroGrid

    NASA Astrophysics Data System (ADS)

    Auden, E.; Dalla, S.

    2008-08-01

    The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.

  9. Arachne—A web-based event viewer for MINERνA

    NASA Astrophysics Data System (ADS)

    Tagg, N.; Brangham, J.; Chvojka, J.; Clairemont, M.; Day, M.; Eberly, B.; Felix, J.; Fields, L.; Gago, A. M.; Gran, R.; Harris, D. A.; Kordosky, M.; Lee, H.; Maggi, G.; Maher, E.; Mann, W. A.; Marshall, C. M.; McFarland, K. S.; McGowan, A. M.; Mislivec, A.; Mousseau, J.; Osmanov, B.; Osta, J.; Paolone, V.; Perdue, G.; Ransome, R. D.; Ray, H.; Schellman, H.; Schmitz, D. W.; Simon, C.; Solano Salinas, C. J.; Tice, B. G.; Walding, J.; Walton, T.; Wolcott, J.; Zhang, D.; Ziemer, B. P.; MinerνA Collaboration

    2012-06-01

    Neutrino interaction events in the MINERνA detector are visually represented with a web-based tool called Arachne. Data are retrieved from a central server via AJAX, and client-side JavaScript draws images into the user's browser window using the draft HTML 5 standard. These technologies allow neutrino interactions to be viewed by anyone with a web browser, allowing for easy hand-scanning of particle interactions. Arachne has been used in MINERνA to evaluate neutrino data in a prototype detector, to tune reconstruction algorithms, and for public outreach and education.

  10. Arachne - A web-based event viewer for MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tagg, N.; /Otterbein Coll.; Brangham, J.

    2011-11-01

    Neutrino interaction events in the MINERvA detector are visually represented with a web-based tool called Arachne. Data are retrieved from a central server via AJAX, and client-side JavaScript draws images into the user's browser window using the draft HTML 5 standard. These technologies allow neutrino interactions to be viewed by anyone with a web browser, allowing for easy hand-scanning of particle interactions. Arachne has been used in MINERvA to evaluate neutrino data in a prototype detector, to tune reconstruction algorithms, and for public outreach and education.

  11. A virtual reality browser for Space Station models

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael; Pandya, Abhilash; Aldridge, Ann; Maida, James

    1993-01-01

    The Graphics Analysis Facility at NASA/JSC has created a visualization and learning tool by merging its database of detailed geometric models with a virtual reality system. The system allows an interactive walk-through of models of the Space Station and other structures, providing detailed realistic stereo images. The user can activate audio messages describing the function and connectivity of selected components within his field of view. This paper presents the issues and trade-offs involved in the implementation of the VR system and discusses its suitability for its intended purposes.

  12. Users' Views about the Usability of Digital Libraries

    ERIC Educational Resources Information Center

    Koohang, Alex; Ondracek, James

    2005-01-01

    This study examined users' current views about the usability of digital libraries and its perceived importance. Age, gender, prior experience with the Internet, college status, and digital library proficiency are the independent variables. Users' current views about the usability of digital libraries and users' perceived importance of digital library…

  13. A novel visualization model for web search results.

    PubMed

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system, with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance among a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.
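
    One plausible reading of the metaphor: map each page's relevance to an orbit radius (closer means more relevant) and an angular speed (faster means more relevant). The mapping functions below are assumptions; the paper's exact formulas are not given in the abstract:

    ```python
    import math

    def place(relevance, t, base_speed=0.4):
        # More relevant pages orbit closer to the "sun" (the query) and
        # move faster, so distance and motion both encode relevance.
        radius = 1.0 / relevance
        angle = base_speed * relevance * t
        return radius * math.cos(angle), radius * math.sin(angle)

    for rel in (0.9, 0.5, 0.2):
        x, y = place(rel, t=10.0)
        print(f"relevance {rel}: position ({x:+.2f}, {y:+.2f})")
    ```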

  14. StarView: The object oriented design of the ST DADS user interface

    NASA Technical Reports Server (NTRS)

    Williams, J. D.; Pollizzi, J. A.

    1992-01-01

    StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS is the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the data to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate ad hoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed. A standard form or table-row format may be used. Navigation capabilities are provided to aid the user in viewing query results. Object-oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model. This concept allows a single query to be built from a single form or multiple forms before it is submitted to ST DADS. Supporting this concept is the ad hoc query generator, which allows the user to select and qualify an indeterminate number of attributes from the database. The user does not need any knowledge of how the joins across the various tables are to be resolved. The ad hoc generator calculates the joins automatically and generates the correct SQL query.
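
    Automatic join calculation of this kind can be sketched as a shortest-path search over a schema graph whose nodes are tables and whose edges are declared foreign-key joins. The tables and columns below are invented for illustration, not the ST DADS schema:

    ```python
    from collections import deque

    # Schema graph: tables are nodes, declared foreign-key joins are edges.
    JOINS = {
        ("observation", "target"): "observation.target_id = target.id",
        ("observation", "instrument"): "observation.instr_id = instrument.id",
        ("target", "catalog"): "target.catalog_id = catalog.id",
    }
    GRAPH = {}
    for (a, b), cond in JOINS.items():
        GRAPH.setdefault(a, []).append((b, cond))
        GRAPH.setdefault(b, []).append((a, cond))

    def join_path(start, goal):
        # Breadth-first search yields the shortest chain of join
        # conditions linking two tables, so users never write joins.
        queue, seen = deque([(start, [])]), {start}
        while queue:
            table, conds = queue.popleft()
            if table == goal:
                return conds
            for nxt, cond in GRAPH.get(table, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, conds + [cond]))
        return None

    print(" AND ".join(join_path("instrument", "catalog")))
    ```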

  15. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend and learn how their skills can contribute to advancing the state of the art of visual analytics.

  16. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.
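
    The gene-to-reaction mapping rests on boolean gene-reaction rules. One common convention (assumed here; others exist) collapses a rule to a single number for map coloring by summing OR branches (isozymes) and taking the minimum over AND terms (complex subunits). The sketch handles only flat rules without parentheses, with invented gene IDs and values:

    ```python
    def reaction_value(rule, gene_data):
        # Collapse a flat boolean gene-reaction rule to one number for
        # map coloring: OR branches (isozymes) are summed, AND terms
        # (complex subunits) take the minimum. No parentheses handled.
        def and_value(term):
            return min(gene_data.get(g.strip(), 0.0) for g in term.split(" and "))
        return sum(and_value(term) for term in rule.split(" or "))

    expression = {"b0114": 8.0, "b0115": 6.5, "b0116": 9.1, "b2903": 1.2}
    # A three-subunit complex OR a single isozyme:
    print(reaction_value("b0114 and b0115 and b0116 or b2903", expression))
    # -> 7.7: the complex is limited by its weakest subunit (6.5) + 1.2
    ```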

  17. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    PubMed Central

    King, Zachary A.; Dräger, Andreas; Ebrahim, Ali; Sonnenschein, Nikolaus; Lewis, Nathan E.; Palsson, Bernhard O.

    2015-01-01

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools. PMID:26313928

  18. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE PAGES

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali; ...

    2015-08-27

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.

  19. A Visual Analytics Approach for Station-Based Air Quality Data

    PubMed Central

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-01-01

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117

  20. A Visual Analytics Approach for Station-Based Air Quality Data.

    PubMed

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
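
    The self-adaptive controller idea can be sketched as picking the finest time granularity that keeps the trends view under a point budget; the budget and granularity table are invented for illustration:

    ```python
    from datetime import datetime

    def calendar_granularity(start, end, max_points=400):
        # Return the finest granularity whose point count stays under
        # the budget; thresholds and names here are invented.
        hours = (end - start).total_seconds() / 3600
        for name, step_h in (("hourly", 1), ("daily", 24),
                             ("weekly", 168), ("monthly", 720)):
            if hours / step_h <= max_points:
                return name
        return "yearly"

    print(calendar_granularity(datetime(2015, 1, 1), datetime(2016, 1, 1)))
    # -> "daily" (8760 hourly points would exceed the budget)
    ```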

  1. Fluidica CFD software for fluids instruction

    NASA Astrophysics Data System (ADS)

    Colonius, Tim

    2008-11-01

    Fluidica is an open-source, freely available Matlab graphical user interface (GUI) to an immersed-boundary Navier-Stokes solver. The algorithm is programmed in Fortran and compiled into Matlab as a mex-function. The user can create external flows about arbitrarily complex bodies and collections of free vortices. The code runs fast enough for complex 2D flows to be computed and visualized in real time on the screen. This facilitates its use in homework and in the classroom for demonstrations of various potential-flow and viscous-flow phenomena. The GUI has been written with the goal of allowing the student to learn how to use the software as she goes along. The user can select which quantities are viewed on the screen, including contours of various scalars, velocity vectors, streamlines, particle trajectories, streaklines, and finite-time Lyapunov exponents. In this talk, we demonstrate the software in the context of worked classroom examples demonstrating lift and drag, starting vortices, separation, and vortex dynamics.

  2. Physiological approach to optimal stereographic game programming: a technical guide

    NASA Astrophysics Data System (ADS)

    Martens, William L.; McRuer, Robert; Childs, C. Timothy; Viirree, Erik

    1996-04-01

    With the advent of mass distribution of consumer VR games comes an imperative to set health and safety standards for the hardware and software used to deliver stereographic content. This is particularly important for game developers who intend to present this stereographic content via head-mounted display (HMD). The visual discomfort that is commonly reported by users of HMD-based VR games presumably could be kept to a minimum if game developers were provided with standards for the display of stereographic imagery. In this paper, we draw upon both results of research in binocular vision and practical methods from clinical optometry to develop technical guidelines for programming stereographic games that have the end user's comfort and safety in mind. This paper provides general strategies for user-centered implementation of 3D virtual worlds, as well as pictorial examples demonstrating a natural means for rendering stereographic imagery more comfortable to view in games employing a first-person perspective.
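
    One concrete check such guidelines lead to is limiting on-screen disparity. By similar triangles, a point at depth Z viewed with interocular distance I and screen distance D needs screen disparity I(Z - D)/Z; a rough comfort screen then flags angular disparities beyond a threshold. The 1-degree limit and viewing geometry below are assumed rule-of-thumb values, not the paper's figures:

    ```python
    import math

    def screen_disparity_cm(depth_cm, ipd_cm=6.3, screen_cm=60.0):
        # Similar triangles: positive = uncrossed (behind the screen),
        # negative = crossed (in front of the screen).
        return ipd_cm * (depth_cm - screen_cm) / depth_cm

    def comfortable(depth_cm, limit_deg=1.0, screen_cm=60.0):
        # Flag points whose angular disparity exceeds the limit.
        d = screen_disparity_cm(depth_cm, screen_cm=screen_cm)
        angle = math.degrees(2 * math.atan(abs(d) / (2 * screen_cm)))
        return angle <= limit_deg

    for depth in (40.0, 55.0, 300.0):
        print(depth, round(screen_disparity_cm(depth), 2), comfortable(depth))
    ```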

  3. GPU-Based Interactive Exploration and Online Probability Maps Calculation for Visualizing Assimilated Ocean Ensembles Data

    NASA Astrophysics Data System (ADS)

    Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.

    2016-02-01

    Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest; what matters is the probability that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models, and move between different spatial or temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions, over the complete time series of the domain.
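
    A minimal sketch of the binning idea described above (assumed interfaces, not the authors' GPU code): after each assimilation cycle, particles that land in the same bin are merged into one particle whose probability is the sum of its parents', keeping the work per cycle proportional to the number of occupied bins rather than the number of paths.

    ```python
    # prob maps a bin index to the probability that a particle seeded at
    # seed_bin occupies that bin; advect(bin, field) -> destination bin is
    # an assumed helper tracing one piecewise trajectory segment.
    def probability_map(seed_bin, velocity_cycles, advect):
        prob = {seed_bin: 1.0}
        for members in velocity_cycles:      # one entry per assimilation cycle
            w = 1.0 / len(members)           # ensemble members equally likely
            nxt = {}
            for b, p in prob.items():
                for field in members:
                    dest = advect(b, field)  # trace one segment for one member
                    nxt[dest] = nxt.get(dest, 0.0) + p * w
            prob = nxt                       # merge: one particle per bin
        return prob
    ```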

  4. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  5. The streams of our lives: visualizing listening histories in context.

    PubMed

    Baur, Dominikus; Seiffert, Frederik; Sedlmair, Michael; Boring, Sebastian

    2010-01-01

    The choices we make when listening to music are expressions of our personal taste and character. Storing and accessing our listening histories is trivial thanks to services like Last.fm, but learning from them and understanding them is not. Existing solutions operate at a very abstract level and only produce statistics. By applying techniques from information visualization to this problem, we were able to provide average people with a detailed and powerful tool for accessing their own musical past. LastHistory is an interactive visualization for displaying music listening histories, along with contextual information from personal photos and calendar entries. Its two main user tasks are (1) analysis, with an emphasis on temporal patterns and hypotheses related to musical genre and sequences, and (2) reminiscing, where listening histories and context represent part of one's past. In this design study paper we give an overview of the field of music listening histories and explain their unique characteristics as a type of personal data. We then describe the design rationale, data and view transformations of LastHistory and present the results from both a lab study and a large-scale online study. We also contrast listening histories with other lifelogging data. The resonant and enthusiastic feedback that we received from average users shows a need for making personal data accessible. We hope to stimulate such developments through this research.

  6. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted increasing attention in the GIS, computer modeling, simulation and visualization fields. Such systems not only can effectively support geological interpretation and analysis work, but can also help improve professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the results show that the methods described in this paper have practical application value.

  7. Using Tablet for visual exploration of second-generation sequencing data.

    PubMed

    Milne, Iain; Stephen, Gordon; Bayer, Micha; Cock, Peter J A; Pritchard, Leighton; Cardle, Linda; Shaw, Paul D; Marshall, David

    2013-03-01

    The advent of second-generation sequencing (2GS) has provided a range of significant new challenges for the visualization of sequence assemblies. These include the large volume of data being generated, short read lengths and the different data types and data formats associated with the diversity of new sequencing technologies. This article illustrates how Tablet, a high-performance graphical viewer for visualization of 2GS assemblies and read mappings, plays an important role in the analysis of these data. We present Tablet and, through a selection of use cases, demonstrate its value in quality assurance and scientific discovery, through features such as whole-reference coverage overviews, variant highlighting, paired-end read mark-up, GFF3-based feature tracks and protein translations. We discuss the computing and visualization techniques utilized to provide a rich and responsive graphical environment that enables users to view a range of file formats with ease. Tablet installers can be freely downloaded from http://bioinf.hutton.ac.uk/tablet in 32- or 64-bit versions for Windows, OS X, Linux or Solaris. For further details on Tablet, contact tablet@hutton.ac.uk.

  8. Route visualization using detail lenses.

    PubMed

    Karnick, Pushpak; Cline, David; Jeschke, Stefan; Razdan, Anshuman; Wonka, Peter

    2010-01-01

    We present a method designed to address some limitations of typical route map displays of driving directions. The main goal of our system is to generate a printable version of a route map that shows the overview and detail views of the route within a single, consistent visual frame. Our proposed visualization provides a more intuitive spatial context than a simple list of turns. We present a novel multifocus technique to achieve this goal, where the foci are defined by points of interest (POI) along the route. A detail lens that encapsulates the POI at a finer geospatial scale is created for each focus. The lenses are laid out on the map to avoid occlusion with the route and each other, and to optimally utilize the free space around the route. We define a set of layout metrics to evaluate the quality of a lens layout for a given route map visualization. We compare standard lens layout methods to our proposed method and demonstrate the effectiveness of our method in generating aesthetically pleasing layouts. Finally, we perform a user study to evaluate the effectiveness of our layout choices.

  9. How Information Visualization Systems Change Users' Understandings of Complex Data

    ERIC Educational Resources Information Center

    Allendoerfer, Kenneth Robert

    2009-01-01

    User-centered evaluations of information systems often focus on the usability of the system rather than its usefulness. This study examined how using an interactive knowledge-domain visualization (KDV) system affected users' understanding of a domain. Interactive KDVs allow users to create graphical representations of domains that depict important…

  10. Bilingual Cancer Genetic Education Modules for the Deaf Community: Development and Evaluation of the Online Video Material.

    PubMed

    Boudreault, Patrick; Wolfson, Alicia; Berman, Barbara; Venne, Vickie L; Sinsheimer, Janet S; Palmer, Christina

    2018-04-01

    Health information about inherited forms of cancer and the role of family history in cancer risk needs improvement for the American Sign Language (ASL) Deaf community, a linguistic and cultural community. Cancer genetic education materials available in English print format are not accessible to many sign language users because English is not their native or primary language. Per Centers for Disease Control and Prevention recommendations, the literacy level of printed health education materials should not be higher than 6th grade level (~ 11 to 12 years old), and even with this recommendation, printed materials are still not accessible to sign language users or other non-native English speakers. Genetic counseling is becoming an integral part of healthcare, but ASL users are often not considered when health education materials are developed. As a result, there are few genetic counseling materials available in ASL. Online tools such as video and closed captioning offer opportunities for educators and genetic counselors to provide digital access to genetic information in ASL to the Deaf community. The Deaf Genetics Project team used a bilingual approach to develop a 37-min interactive Cancer Genetics Education Module (CGEM) video in ASL with closed captions and quizzes, and demonstrated that this approach resulted in greater cancer genetic knowledge and increased intentions to obtain counseling or testing, compared to standard English text information (Palmer et al., Disability and Health Journal, 10(1):23-32, 2017). Though visually enhanced educational materials have been developed for sign language users with a multimodal/multilingual approach, little is known about design features that can accommodate a diverse audience of sign language users so that the material is engaging to a wide audience. The main objectives of this paper are to describe the development of the CGEM and to determine whether viewer demographic characteristics are associated with two measurable aspects of CGEM viewing behavior: (1) length of time spent viewing and (2) number of pause, play, and seek events. These objectives are important to address, especially for Deaf individuals, because the amount of simultaneous content (video, print) requires cross-modal cognitive processing of visual and textual materials. Technology and presentational strategies are needed that enhance, rather than interfere with, health learning in this population.

  11. The impact of online visual on users' motivation and behavioural intention - A comparison between persuasive and non-persuasive visuals

    NASA Astrophysics Data System (ADS)

    Ibrahim, Nurulhuda; Shiratuddin, Mohd Fairuz; Wong, Kok Wai

    2016-08-01

    Research on first impressions has highlighted the importance of visual appeal in influencing a favourable attitude towards a website. From the perspective of impression formation, it is proposed that users are attracted to certain characteristics or aspects of the visual properties of a website, while ignoring the rest. Therefore, this study aims to investigate which visuals strongly appeal to users by comparing the impact of common visuals with persuasive visuals. The principles of social influence are proposed as the added value behind the persuasiveness of web visuals. An experimental study was conducted and the PLS-SEM method employed to analyse the obtained data. The results of the exploratory analyses demonstrated that the structural model has better quality when tested with the persuasive data sample than with the non-persuasive data sample, as evidenced by a stronger coefficient of determination and stronger path coefficients. Thus, it is concluded that persuasive visuals have a greater impact on users' attitudes and behavioural intention towards a website.

  12. Altered visual perception in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2013-09-01

    The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylenedioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe that relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine whether amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths, and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy users, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls, and significantly lower contour detection thresholds (CDTs), indicating worse performance on the task, than both controls and ecstasy + amphetamine users. These results extend previous research and are consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes thought to reflect serotonergic functions in the occipital lobe.

  13. feedr and animalnexus.ca: A paired R package and user-friendly Web application for transforming and visualizing animal movement data from static stations.

    PubMed

    LaZerte, Stefanie E; Reudink, Matthew W; Otter, Ken A; Kusack, Jackson; Bailey, Jacob M; Woolverton, Austin; Paetkau, Mark; de Jong, Adriaan; Hill, David J

    2017-10-01

    Radio frequency identification (RFID) provides a simple and inexpensive approach for examining the movements of tagged animals, which can provide information on species behavior and ecology, such as habitat/resource use and social interactions. In addition, tracking animal movements is appealing to naturalists, citizen scientists, and the general public, and thus represents a tool for public engagement in science and science education. Although RFID is a useful tool, the large amount of data it collects may quickly become overwhelming. Here, we present an R package (feedr) we have developed for loading, transforming, and visualizing time-stamped, georeferenced data, such as RFID data collected from static logger stations. Using our package, data can be transformed from raw RFID data to visits, presence (regular detections by a logger over time), movements between loggers, displacements, and activity patterns. In addition, we provide several conversion functions to allow users to format data for use in functions from other complementary R packages. Data can also be visualized through static or interactive maps or as animations over time. To increase accessibility, data can be transformed and visualized either through R directly, or through the companion site http://animalnexus.ca, an online, user-friendly, R-based Shiny Web application. This system can be used by professional and citizen scientists alike to view and study animal movements. We have designed this package to be flexible and able to handle data collected from other stationary sources (e.g., hair traps, static very high frequency (VHF) telemetry loggers, observations of marked individuals in colonies or staging sites), and we hope this framework will become a meeting point for science, education, and community awareness of the movements of animals. We aim to inspire citizen engagement while simultaneously enabling robust scientific analysis.
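
    feedr itself is written in R; purely as an illustration of the core "raw reads to visits" transformation it performs, here is a hypothetical Python sketch. The column names and the gap threshold are assumptions, not the package's actual interface.

    ```python
    import pandas as pd

    def reads_to_visits(reads: pd.DataFrame, max_gap_s: int = 4) -> pd.DataFrame:
        """reads: columns animal_id, logger_id, time (datetime64)."""
        reads = reads.sort_values(["animal_id", "logger_id", "time"])
        gap = reads.groupby(["animal_id", "logger_id"])["time"].diff()
        # A new visit starts at each animal/logger group and whenever the
        # gap since the previous detection exceeds the threshold.
        new_visit = gap.isna() | (gap.dt.total_seconds() > max_gap_s)
        reads = reads.assign(visit=new_visit.cumsum())
        return (reads.groupby(["animal_id", "logger_id", "visit"])
                     .agg(start=("time", "min"), end=("time", "max"),
                          n_reads=("time", "size"))
                     .reset_index())
    ```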

  14. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is as yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history of sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users recognized affective prosody more accurately than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed worse overall than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  15. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software package that lets users browse large data volumes both as still images and as movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  16. A methodology for coupling a visual enhancement device to human visual attention

    NASA Astrophysics Data System (ADS)

    Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman

    2009-02-01

    The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.

  17. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  18. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.; Halsey, William; Dehoff, Ryan

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  19. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
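
    As a rough sketch of the lens idea (not the authors' optimized implementation), topic modeling can be re-run on just the documents currently under the lens, so topics refine as the lens moves. This assumes scikit-learn; the topic count and variable names are illustrative.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    def topics_under_lens(docs_in_lens, n_topics=5, n_top_words=8):
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(docs_in_lens)          # term weights for lens docs
        model = NMF(n_components=n_topics, init="nndsvd")
        W = model.fit_transform(X)                   # doc-topic weights
        terms = vec.get_feature_names_out()
        topics = [[terms[i] for i in comp.argsort()[::-1][:n_top_words]]
                  for comp in model.components_]     # top terms per topic
        return W, topics
    ```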

  20. A Data Model and Task Space for Data of Interest (DOI) Eye-Tracking Analyses.

    PubMed

    Jianu, Radu; Alam, Sayeed Safayet

    2018-03-01

    Eye-tracking data is traditionally analyzed by looking at where on a visual stimulus subjects fixate or, to facilitate more advanced analyses, by using areas of interest (AOIs) defined on visual stimuli. Recently, there is increasing interest in methods that capture what users are looking at rather than where they are looking. By instrumenting visualization code that transforms a data model into visual content, gaze coordinates reported by an eye-tracker can be mapped directly to the granular data shown on the screen, producing temporal sequences of data objects that subjects viewed in an experiment. Such data collection, which is called gaze-to-object mapping (GTOM) or data-of-interest (DOI) analysis, can be done reliably with limited overhead and can facilitate research workflows not previously possible. Our paper contributes to establishing a foundation for DOI analyses by defining a DOI data model and highlighting its differences from AOI data in structure and scale; by defining and exemplifying a space of DOI-enabled tasks; by describing three concrete examples of DOI experimentation in three different domains; and by discussing immediate research challenges in creating a framework of visual support for DOI experimentation and analysis.
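
    The essence of GTOM can be illustrated with a short, hypothetical sketch: the instrumented renderer logs a screen-space bounding box per drawn data object, and each gaze sample is mapped to the object(s) it falls inside, yielding the temporal DOI sequence. All names below are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DrawnObject:
        obj_id: str
        x0: float
        y0: float
        x1: float
        y1: float          # screen-space bounding box logged by the renderer

    def gaze_to_doi(gaze_samples, scene):
        """gaze_samples: [(t, x, y)]; scene: list[DrawnObject] -> [(t, obj_id)]."""
        doi = []
        for t, x, y in gaze_samples:
            for obj in scene:
                if obj.x0 <= x <= obj.x1 and obj.y0 <= y <= obj.y1:
                    doi.append((t, obj.obj_id))
        return doi
    ```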

  1. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  2. Visual Reconciliation of Alternative Similarity Spaces in Climate Modeling.

    PubMed

    Poco, Jorge; Dasgupta, Aritra; Wei, Yaxing; Hargrove, William; Schwalm, Christopher R; Huntzinger, Deborah N; Cook, Robert; Bertini, Enrico; Silva, Claudio T

    2014-12-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criterion is relatively straightforward, comparing alternative criteria poses additional challenges. In this paper we define visual reconciliation as the problem of reconciling multiple alternative similarity spaces through visualization and interaction. We derive this problem from our work on model comparison in climate science, where climate modelers are faced with the challenge of making sense of alternative ways to describe their models: one through the output they generate, another through the large set of properties that describe them. Ideally, they want to understand whether groups of models with similar spatio-temporal behaviors share similar sets of criteria or, conversely, whether similar criteria lead to similar behaviors. We propose a visual analytics solution based on linked views that addresses this problem by allowing the user to dynamically create, modify and observe the interaction among groupings, thereby making the potential explanations apparent. We present case studies that demonstrate the usefulness of our technique in the area of climate science.

  3. Visualization of particle interactions in granular media.

    PubMed

    Meier, Holger A; Schlemmer, Michael; Wagner, Christian; Kerren, Andreas; Hagen, Hans; Kuhl, Ellen; Steinmann, Paul

    2008-01-01

    Interaction between particles in so-called granular media, such as soil and sand, plays an important role in the context of geomechanical phenomena and numerous industrial applications. A two-scale homogenization approach based on a micro and a macro scale level is briefly introduced in this paper. Computing granular material in such a way gives deeper insight into the behavior of discontinuous materials and at the same time reduces the computational costs. However, the description and the understanding of the phenomena in granular materials are not yet satisfactory. A sophisticated problem-specific visualization technique would significantly help to illustrate failure phenomena on the microscopic level. As our main contribution, we present a novel 2D approach for the visualization of simulation data, based on the above-outlined homogenization technique. Our visualization tool supports visualization on the micro scale level as well as on the macro scale level. The tool shows both aspects closely arranged in the form of multiple coordinated views to give users the possibility to analyze the particle behavior effectively. A novel type of interactive rose diagram was developed to represent the dynamic contact networks on the micro scale level in a condensed and efficient way.

  4. Enhancing multi-view autostereoscopic displays by viewing distance control (VDC)

    NASA Astrophysics Data System (ADS)

    Jurk, Silvio; Duckstein, Bernd; Renault, Sylvain; Kuhlmey, Mathias; de la Barré, René; Ebner, Thomas

    2014-03-01

    Conventional multi-view displays spatially interlace various views of a 3D scene and form appropriate viewing channels. However, they only support sufficient stereo quality within a limited range around the nominal viewing distance (NVD). If this distance is maintained, two slightly divergent views are projected to the person's eyes, both covering the entire screen. With increasing deviations from the NVD, the stereo image quality decreases. As a major drawback in usability, this distance has so far been fixed by the manufacturer. We propose a software-based solution that corrects false view assignments depending on the distance of the viewer. Our novel approach enables continuous view adaptation based on the calculation of intermediate views and a column-by-column rendering method. The algorithm controls each individual subpixel and generates a new interleaving pattern from selected views. In addition, we use color-coded test content to verify its efficacy. This novel technology helps shift the physically determined NVD to a user-defined distance, thereby supporting stereopsis. The resulting viewing positions can fall in front of or behind the NVD of the original setup. Our algorithm can be applied to all multi-view autostereoscopic displays, independent of the slant or the periodicity of the optical element. In general, the viewing distance can be corrected by a factor of more than 2.5. By creating a continuous viewing area, the visualized 3D content is suitable even for persons with widely divergent interocular distance, adults and children alike, without any deficiency in spatial perception.

  5. Clickstream data yields high-resolution maps of science.

    PubMed

    Bollen, Johan; Van de Sompel, Herbert; Hagberg, Aric; Bettencourt, Luis; Chute, Ryan; Rodriguez, Marko A; Balakireva, Lyudmila

    2009-01-01

    Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.
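
    The journal clickstream model described above is a first-order Markov chain; a minimal sketch of estimating its transition probabilities from observed request sequences might look like this (the session format is an assumption):

    ```python
    from collections import defaultdict

    def clickstream_markov(sessions):
        """sessions: iterable of journal-id sequences -> {src: {dst: P(dst|src)}}."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sessions:
            for src, dst in zip(seq, seq[1:]):   # consecutive requests
                counts[src][dst] += 1
        return {src: {dst: c / sum(dsts.values()) for dst, c in dsts.items()}
                for src, dsts in counts.items()}

    # e.g. clickstream_markov([["Nature", "Science", "Nature"], ["Cell", "Nature"]])
    ```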

  6. Clickstream Data Yields High-Resolution Maps of Science

    PubMed Central

    Bollen, Johan; Van de Sompel, Herbert; Rodriguez, Marko A.; Balakireva, Lyudmila

    2009-01-01

    Background Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Methodology Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Conclusions Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data. PMID:19277205

  7. From Information Management to Information Visualization

    PubMed Central

    Karami, Mahtab

    2016-01-01

    Summary Objective The development and implementation of a dashboard of medical imaging department (MID) performance indicators. Method Several articles discussing performance measures of imaging departments were reviewed for this study, and all related measures were extracted. A panel of imaging experts was then asked to rate these measures, with an open-ended question to seek further potential indicators; a second round was performed to confirm the ratings. The indicators and their ratings were then reviewed by an executive panel, and based on this final panel's rating, a list of indicators to be used was developed. A team of information technology consultants was asked to determine a set of user interface requirements for building the dashboard. In the first round, based on the panel's rating, a list of main features or requirements to be used was determined. Next, Qlikview was utilized to implement the dashboard and visualize a set of selected KPI metrics. Finally, an evaluation of the dashboard was performed. Results 92 MID indicators were identified, along with 53 main user interface requirements for building the dashboard prototype. The project team then successfully implemented a prototype of the radiology management dashboard at the study site. The visual display that was designed was rated highly by users. Conclusion To develop a dashboard, management of information is essential. It is recommended that a quality map be designed for the MID; it can be used to specify the sequence of activities, their related indicators and the data required for calculating these indicators. To achieve both an effective dashboard and a comprehensive view of operations, it is necessary to design a data warehouse for gathering data from a variety of systems. Utilizing interoperability standards for exchanging data among different systems can also be effective in this regard. PMID:27437043

  8. Web Image Search Re-ranking with Click-based Similarity and Typicality.

    PubMed

    Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi

    2016-07-20

    In image search re-ranking, besides the well-known semantic gap, the intent gap, which is the gap between the representation of users' query/demand and the real intent of the users, is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as "implicit feedback" from users, to help overcome the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list and the strategy that images with higher relevance should be ranked higher than others are widely accepted. To obtain satisfying search results, therefore, image similarity and the level of relevance typicality are the determining factors. However, when measuring image similarity and typicality, conventional re-ranking approaches consider only visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by computing click-based cluster typicality and within-cluster click-based image typicality in descending order. Our experiments, conducted on two real-world query-image datasets with diverse representative queries, show that our proposed re-ranking approach can significantly improve initial search results and outperforms several existing re-ranking approaches.
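
    Omitting the similarity-learning stage (CMSL), the final clustering-and-typicality step can be sketched as follows, assuming scikit-learn and a precomputed click-based similarity matrix; the simple click-count typicality used here is a stand-in for the paper's measures.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def rerank(similarity: np.ndarray, clicks: np.ndarray, n_clusters: int = 5):
        """similarity: (n, n) learnt image similarity; clicks: (n,) click counts."""
        labels = SpectralClustering(n_clusters=n_clusters,
                                    affinity="precomputed").fit_predict(similarity)
        # Cluster typicality: total clicks; image typicality: own clicks.
        cluster_score = {c: clicks[labels == c].sum() for c in range(n_clusters)}
        return sorted(range(len(clicks)),
                      key=lambda i: (cluster_score[labels[i]], clicks[i]),
                      reverse=True)   # image indices, most typical first
    ```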

  9. Telemetry Monitoring and Display Using LabVIEW

    NASA Technical Reports Server (NTRS)

    Wells, George; Baroth, Edmund C.

    1993-01-01

    The Measurement Technology Center of the Instrumentation Section configures automated data acquisition systems to meet the diverse needs of JPL's experimental research community. These systems are based on personal computers or workstations (Apple, IBM/compatible, Hewlett-Packard, and Sun Microsystems) and often include integrated data analysis, visualization and experiment control functions in addition to data acquisition capabilities. These integrated systems may include sensors, signal conditioning, data acquisition interface cards, software, and a user interface. Graphical programming is used to simplify the configuration of such systems. Employment of a graphical programming language is the most important factor in enabling the implementation of data acquisition, analysis, display and visualization systems at low cost. Other important factors are the use of commercial software packages and off-the-shelf data acquisition hardware where possible. Understanding the experimenter's needs is also critical, as is an interactive approach to user interface construction and training of operators. One application was created as a result of a competitive effort between a graphical programming language team and a text-based C language programming team to verify the advantages of using a graphical programming approach. With approximately eight weeks of funding over a period of three months, the text-based programming team accomplished about 10% of the basic requirements, while the Macintosh/LabVIEW team accomplished about 150%, having gone beyond the original requirements to simulate a telemetry stream and provide utility programs. This application verified that using graphical programming can significantly reduce software development time. As a result of this initial effort, additional follow-on work was awarded to the graphical programming team.

  10. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.
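
    A toy sketch of the shelf mechanism (illustrative only, not the patented implementation): each distinct pairing of the column-shelf and row-shelf field values yields one pane holding the matching records, from which that pane's x- and y-axes are derived.

    ```python
    import pandas as pd

    def build_pane_grid(df: pd.DataFrame, columns_shelf: str, rows_shelf: str):
        """Return {(column_value, row_value): sub-DataFrame} for the visual table."""
        return {(c, r): g for (c, r), g in df.groupby([columns_shelf, rows_shelf])}

    # e.g. panes = build_pane_grid(sales, columns_shelf="region", rows_shelf="year")
    ```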

  11. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  12. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used to evaluate visual fatigue in discrete viewing processes. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate their visual fatigue whenever it changes, without interrupting the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous viewing process than in a discrete one. BF also increased with time during the continuous viewing process. In addition, viewing induced significant changes in VRT, CFF and PMA.

  13. Tablet and Smartphone Accessibility Features in the Low Vision Rehabilitation

    PubMed Central

    Irvine, Danielle; Zemke, Alex; Pusateri, Gregg; Gerlach, Leah; Chun, Rob; Jay, Walter M.

    2014-01-01

    Tablet and smartphone use is rapidly increasing in developed countries. With this upsurge in popularity, the devices themselves are becoming more user-friendly for all consumers, including the visually impaired. Traditionally, visually impaired patients have received optical rehabilitation in the forms of microscopes, stand magnifiers, handheld magnifiers, telemicroscopes, and electronic magnification such as closed-circuit televisions (CCTVs). In addition to the optical and financial limitations of traditional devices, patients do not always view them as socially acceptable. For this reason, devices are often underutilised by patients, who avoid using them in public forums or when among peers. By incorporating smartphones and tablets into a patient's low vision rehabilitation, in addition to traditional devices, one provides versatile and mainstream options, which may also be less expensive. This article explains what the accessibility features of tablets and smartphones are for the blind and visually impaired, how to access them, and provides an introduction to using these features. PMID:27928274

  14. Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.

    PubMed

    Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I

    2016-06-01

    In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of distinct types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification was valuable and effective for both scientific and educational purposes.

  15. Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models

    PubMed Central

    Le Muzic, M.; Mindek, P.; Sorger, J.; Autin, L.; Goodsell, D.; Viola, I.

    2017-01-01

    In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of distinct types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification was valuable and effective for both scientific and educational purposes. PMID:28344374

  16. 3D visualization and stereographic techniques for medical research and education.

    PubMed

    Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F

    2001-01-01

    While computers have long been able to work with true 3D models, the same does not generally apply to their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object, but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist. (1) Stereo pairs: using image analysis tools or 3D software, a pair of images can be made representing the left-eye and right-eye views of an object. Placed next to each other and viewed through a separator, the three-dimensionality of the object can be perceived. While this is usually done on still images, tests at Mednet have shown it to work with interactively animated models as well; however, the technique requires some training and experience. (2) Pseudo-3D, such as VRML or QuickTime VR, where the interactive manipulation of a 3D model lets the user achieve a sense of the model's true proportions. While this technique works reasonably well, it is not a true stereographic visualization technique. (3) Red/green separation, i.e. the traditional "3D image", where red and green representations of a model are superimposed at an angle corresponding to the viewing angle of the eyes; using a matching set of eyeglasses, a person can create a mental 3D image. The end result does produce a sense of 3D, but the effect is difficult to maintain. (4) Alternating left/right-eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a left-eye image followed by a right-eye image while simultaneously triggering the eyepiece to alternately blind one eye. When run at 60 Hz or higher, the brain fuses the left/right images together and the user effectively sees a 3D object. Depending on the configuration, alternating systems run at between 50 and 60 Hz, creating a flickering effect that is strenuous for prolonged use. All of the above have one or more drawbacks, such as high cost, poor quality and localized use. A fifth system, recently released by Barco Systems, modifies the CrystalEyes approach by projecting two superimposed images using polarized light, with the wave plane of the left image at right angles to that of the right image. By using polarized glasses, each eye sees the appropriate image and true stereographic vision is achieved. While the system requires very expensive hardware, it solves some of the more important problems mentioned above, such as the capacity to use higher frame rates and the ability to display images to a large audience. Mednet has instigated a research project that uses reconstructed models from the central nervous system (human brain and basal ganglia, cortex, dendrites and dendritic spines) and the peripheral nervous system (nodes of Ranvier and axoplasmic areas). The aim is to modify the models to fit the different visualization techniques mentioned above and to compare a group of users' perceived degree of 3D for each technique.

  17. Challenges of Replacing NAD 83, NAVD 88, and IGLD 85: Exploiting the Characteristics of 3-D Digital Spatial Data

    NASA Astrophysics Data System (ADS)

    Burkholder, E. F.

    2016-12-01

    One way to address the challenges of replacing NAD 83, NAVD 88 and IGLD 85 is to exploit the characteristics of 3-D digital spatial data. This presentation describes the 3-D global spatial data model (GSDM), which accommodates rigorous scientific endeavors while simultaneously supporting a local flat-earth view of the world. The GSDM is based upon the assumption of a single origin for 3-D spatial data and uses the rules of solid geometry for manipulating spatial data components. This approach exploits the characteristics of 3-D digital spatial data and preserves the quality of geodetic measurements while providing spatial data users the option of working with rectangular flat-earth components and computational procedures for local applications. This flexibility is provided by a bidirectional rotation matrix that allows any 3-D vector to be used in a geodetic reference frame for high-end applications and/or the local frame for flat-earth users. The GSDM is viewed as compatible with the datum products being developed by NGS and provides for unambiguous exchange of 3-D spatial data between disciplines and users worldwide. Three geometrical models will be summarized - geodetic, map projection, and 3-D. Geodetic computations are performed on an ellipsoid and are without equal in providing rigorous coordinate values for latitude, longitude, and ellipsoid height. Members of the user community have, for generations, sought ways to "flatten the world" to accommodate a flat-earth view and to avoid the complexity of working on an ellipsoid. Map projections have been defined for a wide variety of applications and remain very useful for visualizing spatial data. But the GSDM supports computations based on 3-D components that have not been distorted in a 2-D map projection. The GSDM does not invalidate either geodetic or cartographic computational processes but provides a geometrically correct view of any point cloud from any point selected by the user. As a bonus, the GSDM also defines spatial data accuracy and includes procedures for establishing, tracking and using spatial data accuracy - increasingly important in many applications but especially relevant given the development of procedures for tracking drones (primarily absolute) and intelligent vehicles (primarily relative).
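
    The "bidirectional rotation matrix" can be illustrated with the standard rotation between the global Earth-centered (ECEF) frame and a local east-north-up frame. The sketch below is a minimal example of that textbook construction, not the GSDM implementation itself; because the matrix is orthogonal, its transpose carries vectors back the other way.

      import numpy as np

      def ecef_to_enu_matrix(lat_rad, lon_rad):
          """Rotation from the global ECEF frame to a local east-north-up
          frame at geodetic latitude/longitude (radians). The matrix is
          orthogonal, so its transpose rotates local flat-earth components
          back into the global frame (the 'bidirectional' property)."""
          sin_lat, cos_lat = np.sin(lat_rad), np.cos(lat_rad)
          sin_lon, cos_lon = np.sin(lon_rad), np.cos(lon_rad)
          return np.array([
              [-sin_lon,            cos_lon,            0.0],
              [-sin_lat * cos_lon, -sin_lat * sin_lon,  cos_lat],
              [ cos_lat * cos_lon,  cos_lat * sin_lon,  sin_lat],
          ])

      # A global 3-D vector expressed in local components and rotated back.
      R = ecef_to_enu_matrix(np.radians(35.0), np.radians(-106.0))
      v_global = np.array([100.0, -50.0, 25.0])
      v_local = R @ v_global
      assert np.allclose(R.T @ v_local, v_global)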

  18. Representing Graphical User Interfaces with Sound: A Review of Approaches

    ERIC Educational Resources Information Center

    Ratanasit, Dan; Moore, Melody M.

    2005-01-01

    The inability of computer users who are visually impaired to access graphical user interfaces (GUIs) has led researchers to propose approaches for adapting GUIs to auditory interfaces, with the goal of providing access for visually impaired people. This article outlines the issues involved in nonvisual access to graphical user interfaces, reviews…

  19. Penn State's Visual Image User Study

    ERIC Educational Resources Information Center

    Pisciotta, Henry A.; Dooris, Michael J.; Frost, James; Halm, Michael

    2005-01-01

    The Visual Image User Study (VIUS), an extensive needs assessment project at Penn State University, describes academic users of pictures and their perceptions. These findings outline the potential market for digital images and list the likely determinates of whether or not a system will be used. They also explain some key user requirements for…

  20. Visual Information for the Desktop, version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-03-29

    VZIN integrates visual analytics capabilities into popular desktop tools to aid a user in searching and understanding an information space. VZIN allows users to Drag-Drop-Visualize-Explore-Organize information within tools such as Microsoft Office, Windows Explorer, Excel, and Outlook. VZIN is tailorable to specific client or industry requirements. VZIN follows the desktop metaphors so that advanced analytical capabilities are available with minimal user training.

  1. Development of a customizable software application for medical imaging analysis and visualization.

    PubMed

    Martinez-Escobar, Marisol; Peloquin, Catherine; Juhnke, Bethany; Peddicord, Joanna; Jose, Sonia; Noon, Christian; Foo, Jung Leng; Winer, Eliot

    2011-01-01

    Graphics technology has extended medical imaging tools to the hands of surgeons and doctors, beyond the radiology suite. However, a common issue in most medical imaging software is the added complexity for non-radiologists. This paper presents the development of a unique software toolset that is highly customizable and targeted at general physicians as well as medical specialists. The core functionality includes features such as viewing medical images in two- and three-dimensional representations, clipping, tissue windowing, and coloring. Additional features can be loaded in the form of 'plug-ins', such as tumor segmentation, tissue deformation, and surgical planning. This allows the software to be lightweight and easy to use while still giving the user the flexibility of adding the necessary features, thus catering to a wide range of users.
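
    A minimal sketch of the plug-in architecture the paper describes, with a lightweight core and optional features registered on demand; the class name, registry design, and plug-in names here are hypothetical.

      class ViewerCore:
          """Minimal core viewer that loads optional feature plug-ins."""

          def __init__(self):
              self._plugins = {}

          def register(self, name, plugin):
              # Plug-ins such as tumor segmentation or surgical planning
              # are added only when needed, keeping the core lightweight.
              self._plugins[name] = plugin

          def run(self, name, *args, **kwargs):
              return self._plugins[name](*args, **kwargs)

      viewer = ViewerCore()
      viewer.register("tumor_segmentation", lambda image: f"segmented {image}")
      print(viewer.run("tumor_segmentation", "ct_scan_042"))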

  2. From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy

    NASA Astrophysics Data System (ADS)

    Jang, Susan

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked-pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. The manipulation group outperformed the visual group, suggesting that the interactivity that took place among the manipulation group promoted visual and motoric embodiment, which in turn enhanced learning. Moreover, when accounting for spatial ability, it was found that manipulation benefits students with low spatial ability more than students with high spatial ability.

  3. Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models

    PubMed Central

    Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney

    2014-01-01

    Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535

  4. BrainNet Viewer: a network visualization tool for human brain connectomics.

    PubMed

    Xia, Mingrui; Wang, Jinhui; He, Yong

    2013-01-01

    The human brain is a complex system whose topological organization can be represented using connectomics. Recent studies have shown that human connectomes can be constructed using various neuroimaging technologies and further characterized using sophisticated analytic strategies, such as graph theory. These methods reveal the intriguing topological architectures of human brain networks in healthy populations and explore the changes throughout normal development and aging and under various pathological conditions. However, given the huge complexity of this methodology, toolboxes for graph-based network visualization are still lacking. Here, using MATLAB with a graphical user interface (GUI), we developed a graph-theoretical network visualization toolbox, called BrainNet Viewer, to illustrate human connectomes as ball-and-stick models. Within this toolbox, several combinations of defined files with connectome information can be loaded to display different combinations of brain surface, nodes and edges. In addition, display properties, such as the color and size of network elements or the layout of the figure, can be adjusted within a comprehensive but easy-to-use settings panel. Moreover, BrainNet Viewer draws the brain surface, nodes and edges in sequence and displays brain networks in multiple views, as required by the user. The figure can be manipulated with certain interaction functions to display more detailed information. Furthermore, the figures can be exported as commonly used image file formats or demonstration video for further use. BrainNet Viewer helps researchers to visualize brain networks in an easy, flexible and quick manner, and this software is freely available on the NITRC website (www.nitrc.org/projects/bnv/).

  5. EMPIRICAL STUDY ON USABILITY OF CROSSING SUPPORT SYSTEM FOR VISUALLY DISABLED AT SIGNALIZED INTERSECTION

    NASA Astrophysics Data System (ADS)

    Suzuki, Koji; Fujita, Motohiro; Matsuura, Kazuma; Fukuzono, Kazuyuki

    This paper evaluates the adjustment process for a crossing support system for the visually disabled at signalized intersections, which uses pedestrian traffic signals in concert with visible light communication (VLC) technology, through outdoor experiments. In the experiments, we blindfolded sighted people with eye masks in order to analyze the behavior of people with acquired visual disabilities, and we used a full-scale crosswalk that takes into consideration the crossing slope, the bumps at the edge of a crosswalk between the roadway and the sidewalk, and the crosswalk lines. The survey results show that repetitive use of the VLC system decreased the number of participants who lost their bearings completely and ended up standing immobile, and reduced the crossing time for each person. We also show that the performance of our VLC system is nearly equal to that of the existing support system from the viewpoint of crossing time and the number of participants standing immobile, and we clarify the factors affecting guidance accuracy through regression analyses. We then grouped the test subjects into patterns by cluster analysis and describe the walking characteristics of each group as they used the VLC system. In addition, we conducted additional surveys with quasi-blind subjects who had difficulty walking with the VLC system and with visually impaired users. As a result, we found that guidance accuracy was improved by providing information about their receiving movement at several points on the crosswalk and about each user's walking habits.

  6. Protective laser beam viewing device

    DOEpatents

    Neil, George R.; Jordan, Kevin Carl

    2012-12-18

    A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.

  7. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    PubMed

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. FlyAR: augmented reality supported micro aerial vehicle navigation.

    PubMed

    Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard

    2014-04-01

    Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context automatic flight path planning and autonomous flying is often applied but so far cannot fully replace the human in the loop, supervising the flight on-site to assure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle’s position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user’s view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on the spatial understanding.

  9. Updates to SCORPION persistent surveillance system with universal gateway

    NASA Astrophysics Data System (ADS)

    Coster, Michael; Chambers, Jon; Winters, Michael; Brunck, Al

    2008-10-01

    This paper addresses benefits derived from the universal gateway utilized in Northrop Grumman Systems Corporation's (NGSC) SCORPION, a persistent surveillance and target recognition system produced by the Xetron campus in Cincinnati, Ohio. SCORPION is currently deployed in Operations Iraqi Freedom (OIF) and Enduring Freedom (OEF). The SCORPION universal gateway is a flexible, field programmable system that provides integration of over forty Unattended Ground Sensor (UGS) types from a variety of manufacturers, multiple visible and thermal electro-optical (EO) imagers, and numerous long haul satellite and terrestrial communications links, including the Army Research Lab (ARL) Blue Radio. Xetron has been integrating best in class sensors with this universal gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control since 1998. In addition to being fed to COP systems, SCORPION data can be visualized in the Common sensor Status (CStat) graphical user interface that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This user friendly visualization enables a large amount of sensor data and imagery to be used as actionable intelligence by a minimum number of analysts.

  10. GenomeGems: evaluation of genetic variability from deep sequencing data

    PubMed Central

    2012-01-01

    Background Detection of disease-causing mutations using deep sequencing technologies poses great challenges. In particular, organizing the large number of sequences generated so that potentially biologically relevant mutations are easily identified is a difficult task, yet only limited automated, accessible tools exist for it. Findings We developed GenomeGems to fill this need by enabling the user to view and compare Single Nucleotide Polymorphisms (SNPs) from multiple datasets and to load the data onto the UCSC Genome Browser for an expanded and familiar visualization. As such, via automatic, clear and accessible presentation of processed deep sequencing data, our tool aims to facilitate the ranking of genomic SNP calls. GenomeGems runs on a local Personal Computer (PC) and is freely available at http://www.tau.ac.il/~nshomron/GenomeGems. Conclusions GenomeGems enables researchers to identify potential disease-causing SNPs in an efficient manner. This enables rapid turnover of information and leads to further experimental SNP validation. The tool allows the user to compare and visualize SNPs from multiple experiments and to easily load SNP data onto the UCSC Genome Browser for further detailed information. PMID:22748151
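
    One way such cross-dataset comparison and browser upload could look in code is sketched below: intersecting SNP calls from multiple datasets and writing them as a BED track, a format the UCSC Genome Browser accepts. The helper names and the (chrom, pos) input format are assumptions for illustration, not GenomeGems internals.

      def shared_snps(datasets):
          """Positions called in every dataset; `datasets` maps a sample
          name to a set of (chrom, pos) SNP calls."""
          calls = iter(datasets.values())
          common = set(next(calls))
          for snp_set in calls:
              common &= snp_set
          return sorted(common)

      def write_bed(snps, path):
          """Write SNPs as a BED track for a genome-browser upload."""
          with open(path, "w") as fh:
              for chrom, pos in snps:  # BED intervals are 0-based, half-open
                  fh.write(f"{chrom}\t{pos - 1}\t{pos}\tSNP\n")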

  11. Detection and quantification of flow consistency in business process models.

    PubMed

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
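
    As a rough illustration of such a metric, the sketch below scores flow consistency as the fraction of sequence-flow edges that point in the dominant left-to-right reading direction. It is a simplified stand-in for, not a reproduction of, the three metrics proposed in the paper.

      def flow_consistency(edges, positions):
          """Fraction of edges pointing left-to-right in the layout.

          `edges` is a list of (source, target) node pairs and `positions`
          maps each node to its (x, y) layout coordinate. A model whose
          edges mostly share the reading direction scores close to 1.
          """
          if not edges:
              return 1.0
          rightward = sum(1 for s, t in edges if positions[t][0] > positions[s][0])
          return rightward / len(edges)

      # A three-node model laid out strictly left to right is fully consistent.
      pos = {"start": (0, 0), "task": (1, 0), "end": (2, 0)}
      print(flow_consistency([("start", "task"), ("task", "end")], pos))  # 1.0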

  12. Orientation-Enhanced Parallel Coordinate Plots.

    PubMed

    Raidou, Renata Georgia; Eisemann, Martin; Breeuwer, Marcel; Eisemann, Elmar; Vilanova, Anna

    2016-01-01

    Parallel Coordinate Plots (PCPs) are one of the most powerful techniques for the visualization of multivariate data. However, for large datasets, the representation suffers from clutter due to overplotting. In this case, discerning the underlying data information and selecting specific interesting patterns can become difficult. We propose a new and simple technique to improve the display of PCPs by emphasizing the underlying data structure. Our Orientation-enhanced Parallel Coordinate Plots (OPCPs) improve pattern and outlier discernibility by visually enhancing parts of each PCP polyline with respect to its slope. This enhancement also allows us to introduce a novel and efficient selection method, Orientation-enhanced Brushing (O-Brushing). Our solution is particularly useful when multiple patterns are present or when the view on certain patterns is obstructed by noise. We present the results of our approach with several synthetic and real-world datasets. Finally, we conducted a user evaluation, which verifies the advantages of the OPCPs in terms of discernibility of information in complex data. It also confirms that O-Brushing eases the selection of data patterns in PCPs and reduces the amount of necessary user interactions compared to state-of-the-art brushing techniques.
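
    A minimal sketch of the slope-based idea: compute a normalized orientation value per polyline segment that a renderer could feed into a diverging color ramp. The function below is illustrative only, not the authors' OPCP implementation.

      import math

      def orientation_value(y0, y1, axis_gap=1.0):
          """Normalized orientation of one PCP segment between adjacent
          axes: -1 for steeply falling, 0 for flat, +1 for steeply rising.
          Mapping this value to color makes polylines with different
          slopes visually separable even under heavy overplotting."""
          return math.atan2(y1 - y0, axis_gap) / (math.pi / 2)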

  13. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    PubMed Central

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon

    2017-01-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871

  14. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization by enabling better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone-mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
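
    For illustration, the sketch below applies a classic global tone-mapping operator (Reinhard-style) that compresses overexposed highlights while preserving contrast in dark regions; it is a generic stand-in for, not a reproduction of, the automatic operator proposed in the paper.

      import numpy as np

      def tonemap(luminance, key=0.18, eps=1e-6):
          """Global tone mapping: scale by the log-average luminance,
          then compress highlights with L / (1 + L), which recovers
          detail in overexposed areas while keeping dark-area contrast."""
          log_mean = np.exp(np.mean(np.log(luminance + eps)))
          scaled = key * luminance / log_mean
          return scaled / (1.0 + scaled)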

  15. BlockLogo: visualization of peptide and sequence motif conservation

    PubMed Central

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian; Sun, Jing; Schönbach, Christian; Reinherz, Ellis L.; Zhang, Guang Lan; Brusic, Vladimir

    2013-01-01

    BlockLogo is a web-server application for visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs using calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, selection of motif positions, type of sequence, and output format definition. The output has BlockLogo along with the sequence logo, and a table of motif frequencies. We deployed BlockLogo as an online application and have demonstrated its utility through examples that show visualization of T-cell epitopes and B-cell epitopes (both continuous and discontinuous). Our additional example shows a visualization and analysis of structural motifs that determine specificity of peptide binding to HLA-DR molecules. The BlockLogo server also employs selected experimentally validated prediction algorithms to enable on-the-fly prediction of MHC binding affinity to 15 common HLA class I and class II alleles as well as visual analysis of discontinuous epitopes from multiple sequence alignments. It enables the visualization and analysis of structural and functional motifs that are usually described as regular expressions. It provides a compact view of discontinuous motifs composed of distant positions within biological sequences. BlockLogo is available at: http://research4.dfci.harvard.edu/cvc/blocklogo/ and http://methilab.bu.edu/blocklogo/ PMID:24001880
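
    Block entropy over user-selected (possibly discontinuous) motif positions can be sketched as the joint Shannon entropy of the residue blocks extracted from each aligned sequence. The minimal example below assumes equal-length sequences and 0-based positions, and is not the BlockLogo code itself.

      import math
      from collections import Counter

      def entropy(symbols):
          """Shannon entropy (bits) of a list of symbols."""
          counts = Counter(symbols)
          n = len(symbols)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      def block_entropy(alignment, positions):
          """Joint entropy of the blocks formed by each sequence's
          residues at the selected motif positions."""
          blocks = ["".join(seq[p] for p in positions) for seq in alignment]
          return entropy(blocks)

      msa = ["ACDEF", "ACDEG", "ACDFG", "TCDEG"]
      print(block_entropy(msa, [0, 3, 4]))  # 2.0 bits: all four blocks differ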

  16. A Network and Visual Quality Aware N-Screen Content Recommender System Using Joint Matrix Factorization

    PubMed Central

    Ullah, Farman; Sarwar, Ghulam; Lee, Sungchang

    2014-01-01

    We propose a network and visual quality aware N-Screen content recommender system. N-Screen provides more ways than ever before to access multimedia content through multiple devices and heterogeneous access networks. The heterogeneity of devices and access networks raises new questions of QoS (quality of service) in the realm of the user's experience with content. We propose a recommender system that ensures better visual quality on users' N-Screen devices and efficient utilization of the available access network bandwidth, taking user preferences into account. The proposed system estimates the available bandwidth and achievable visual quality on users' N-Screen devices and integrates them with user preferences and content genre information to personalize recommended content. The objective is to recommend content that the user's N-Screen device and access network are capable of displaying and streaming, subject to user preferences, which has not been supported in existing systems. Furthermore, we suggest a joint matrix factorization approach that jointly factorizes the users' rating matrix with the users' N-Screen device similarity and program genre similarity. Finally, the experimental results show that we also enhance prediction and recommendation accuracy and mitigate sparsity and cold-start issues. PMID:24982999
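
    One plausible reading of the joint factorization is sketched below: a single gradient step that fits observed ratings while a graph regularizer pulls together the latent factors of users with similar N-Screen device capabilities. The parameter names and the Laplacian-based penalty are assumptions for illustration, not the authors' exact formulation.

      import numpy as np

      def joint_mf_step(R, U, V, S_dev, lam=0.02, beta=0.1, lr=0.005):
          """One gradient step of a joint matrix factorization sketch.

          R: user x item ratings (0 = unobserved); U, V: latent factors;
          S_dev: user x user device-capability similarity. The beta term
          penalizes tr(U^T L U), pulling similar users' factors together.
          """
          mask = R > 0
          E = mask * (R - U @ V.T)                 # errors on observed ratings
          L = np.diag(S_dev.sum(axis=1)) - S_dev   # graph Laplacian of S_dev
          grad_U = -E @ V + lam * U + beta * (L @ U)
          grad_V = -E.T @ U + lam * V
          return U - lr * grad_U, V - lr * grad_V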

  17. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
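
    The distance-based rendering strategy can be illustrated by downsampling a depth map onto a 15 x 18 array and mapping near structures to bright phosphenes; the sketch below is a simplified stand-in for the renderer evaluated in the study.

      import numpy as np

      def distance_rendering(depth, rows=15, cols=18, max_dist=9.0):
          """Map an H x W depth image (meters) to electrode brightness:
          near surfaces bright, surfaces at or beyond max_dist dark.
          Assumes the depth image is larger than the electrode array."""
          h, w = depth.shape
          out = np.zeros((rows, cols))
          for i in range(rows):
              for j in range(cols):
                  cell = depth[i * h // rows:(i + 1) * h // rows,
                               j * w // cols:(j + 1) * w // cols]
                  out[i, j] = 1.0 - min(cell.mean(), max_dist) / max_dist
          return out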

  18. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process with the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  19. Securing information display by use of visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2003-09-01

    We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
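
    The classic (2,2) visual cryptography construction behind such displays can be sketched in a few lines: each secret pixel expands into complementary subpixel patterns, so either share alone looks random while stacking the displayed share with the decoding mask reveals the image. The code below illustrates the generic construction, not the authors' specific encryption codes.

      import random

      # Each secret pixel becomes a 2x2 block (flattened to 4 subpixels).
      PATTERNS = [(1, 0, 0, 1), (0, 1, 1, 0)]  # complementary half-dark blocks

      def make_shares(secret):
          """(2,2) visual cryptography for a 2-D list of 0/1 pixels.
          White pixels (0) get identical patterns on both shares; black
          pixels (1) get complementary ones, so stacking (pixelwise OR)
          shows white as half-dark blocks and black as fully dark."""
          share1, share2 = [], []
          for row in secret:
              r1, r2 = [], []
              for pixel in row:
                  p = random.choice(PATTERNS)
                  q = p if pixel == 0 else tuple(1 - b for b in p)
                  r1.append(p)
                  r2.append(q)
              share1.append(r1)
              share2.append(r2)
          return share1, share2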

  20. Multi-scale visual analysis of time-varying electrocorticography data via clustering of brain regions

    DOE PAGES

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...

    2017-06-06

    There exists a need for effective and easy-to-use software tools supporting the analysis of complex Electrocorticography (ECoG) data. Understanding how epileptic seizures develop or identifying diagnostic indicators for neurological diseases require the in-depth analysis of neural activity data from ECoG. Such data is multi-scale and is of high spatio-temporal resolution. Comprehensive analysis of this data should be supported by interactive visual analysis methods that allow a scientist to understand functional patterns at varying levels of granularity and comprehend its time-varying behavior. We introduce a novel multi-scale visual analysis system, ECoG ClusterFlow, for the detailed exploration of ECoG data. Our system detects and visualizes dynamic high-level structures, such as communities, derived from the time-varying connectivity network. The system supports two major views: 1) an overview summarizing the evolution of clusters over time and 2) an electrode view using hierarchical glyph-based design to visualize the propagation of clusters in their spatial, anatomical context. We present case studies that were performed in collaboration with neuroscientists and neurosurgeons using simulated and recorded epileptic seizure data to demonstrate our system's effectiveness. ECoG ClusterFlow supports the comparison of spatio-temporal patterns for specific time intervals and allows a user to utilize various clustering algorithms. Neuroscientists can identify the site of seizure genesis and its spatial progression during the various stages of a seizure. Our system serves as a fast and powerful means for the generation of preliminary hypotheses that can be used as a basis for subsequent application of rigorous statistical methods, with the ultimate goal being the clinical treatment of epileptogenic zones.

  1. PyMOL mControl: Manipulating molecular visualization with mobile devices.

    PubMed

    Lam, Wendy W T; Siu, Shirley W I

    2017-01-02

    Viewing and manipulating three-dimensional (3D) structures in molecular graphics software are essential tasks for researchers and students seeking to understand the functions of molecules. Currently, manipulation of a 3D molecular object is mainly based on mouse-and-keyboard control, which is often difficult and tedious to learn. While gesture-based and touch-based interactions are increasingly popular in interactive software systems, their suitability for handling molecular graphics has not yet been sufficiently explored. Here, we designed gesture-based and touch-based interaction methods to manipulate virtual objects in PyMOL, utilizing the motion and touch sensors in a mobile device. Three fundamental viewing controls (zooming, translation, and rotation) and frequently used functions were implemented. Results from a pilot user study reveal that task performance on viewing controls using a mobile device is slightly reduced compared to the mouse-and-keyboard method. However, the approach is considered more suitable for oral presentations and equally suitable for educational scenarios such as school classes. Overall, PyMOL mControl provides an alternative way to manipulate objects in molecular graphics software with new user experiences. The software is freely available at http://cbbio.cis.umac.mo/mcontrol.html. © 2016 The International Union of Biochemistry and Molecular Biology, 45(1):76-83, 2017.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.

    Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow-based system for performing scientific visualization (AVS), extensions to the system to support VR input devices, and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation as well as some viewing parameters using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.

  3. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  4. Head-mounted spatial instruments II: Synthetic reality or impossible dream

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Grunwald, Arthur

    1989-01-01

    A spatial instrument is defined as a spatial display which has been either geometrically or symbolically enhanced to enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. The design of spatial instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but also may significantly benefit from the introduction of artificial distortions which enhance performance. However, these image manipulations can cause a loss of visual-vestibular coordination and induce motion sickness. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.

  5. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  6. Assisting the visually impaired: obstacle detection and warning system by acoustic feedback.

    PubMed

    Rodríguez, Alberto; Yebes, J Javier; Alcantarilla, Pablo F; Bergasa, Luis M; Almazán, Javier; Cela, Andrés

    2012-12-17

    This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds in the local environment. A user study with four visually impaired volunteers supports the proposed system.
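
    A minimal sketch of the RANSAC ground-plane step described above, fitting a plane to 3-D points triangulated from the disparity map; the additional filtering stages of the actual system are omitted, and the tolerance and iteration count here are illustrative.

      import numpy as np

      def ransac_ground_plane(points, iters=200, tol=0.02):
          """Fit a plane to an (N, 3) array of 3-D points with RANSAC.
          Returns (normal, d) with normal . p + d = 0, choosing the plane
          supported by the most inliers within `tol` meters."""
          best, best_inliers = None, 0
          rng = np.random.default_rng(0)
          for _ in range(iters):
              sample = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-9:
                  continue  # degenerate (collinear) sample
              normal /= norm
              d = -normal @ sample[0]
              inliers = np.sum(np.abs(points @ normal + d) < tol)
              if inliers > best_inliers:
                  best, best_inliers = (normal, d), inliers
          return best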

  7. Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

    PubMed Central

    Rodríguez, Alberto; Yebes, J. Javier; Alcantarilla, Pablo F.; Bergasa, Luis M.; Almazán, Javier; Cela, Andrés

    2012-01-01

    This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds in the local environment. A user study with four visually impaired volunteers supports the proposed system. PMID:23247413

  8. Other drug use does not impact cognitive impairments in chronic ketamine users.

    PubMed

    Zhang, Chenxi; Tang, Wai Kwong; Liang, Hua Jun; Ungvari, Gabor Sandor; Lin, Shih-Ku

    2018-05-01

    Ketamine abuse causes cognitive impairments, which negatively impact users' abstinence, prognosis, and quality of life. Reports of cognitive impairments in chronic ketamine users have been inconsistent across studies, possibly due to small sample sizes and the confounding effects of concomitant use of other illicit drugs. This study investigated cognitive impairment and its related factors in chronic ketamine users with a large sample size and explored the impact of other drug use on cognitive functions. Cognitive functions, including working, verbal, and visual memory and executive functions, were assessed in 286 ketamine users who were non-heavy users of other drugs, 279 ketamine users who were heavy users of other drugs, and 261 healthy controls. Correlations between cognitive impairment and patterns of ketamine use were analysed. Verbal and visual memory were impaired, but working memory and executive functions were intact, for all ketamine users. No significant cognitive differences were found between the two ketamine groups. A greater number of days of ketamine use in the past month was associated with worse visual memory performance in non-heavy users of other drugs. A higher dose of ketamine was associated with worse short-term verbal memory in heavy users of other drugs. Verbal and visual memory are impaired in chronic ketamine users. Other drug use appears to have no impact on ketamine users' cognitive performance. Copyright © 2018. Published by Elsevier B.V.

  9. Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application

    NASA Astrophysics Data System (ADS)

    Pala, S.; Stevens, R.; Surman, P.

    2007-02-01

    Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels, viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed while viewing the 3D image, and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload, alongside traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and an increase in workload as crosstalk increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
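
    The electronic mixing used to introduce controlled crosstalk can be written as a simple linear blend of the two video signals, where c is the crosstalk fraction; this formulation is an assumption consistent with the description above, not the exact mixing circuit.

      def mix_crosstalk(left, right, c):
          """Each eye's signal receives (1 - c) of its own image plus
          c of the other eye's image; c = 0 reproduces the inherently
          crosstalk-free display, larger c simulates channel leakage."""
          left_out = (1.0 - c) * left + c * right
          right_out = (1.0 - c) * right + c * left
          return left_out, right_out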

  10. Saliency in VR: How Do People Explore Virtual Environments?

    PubMed

    Sitzmann, Vincent; Serrano, Ana; Pavel, Amy; Agrawala, Maneesh; Gutierrez, Diego; Masia, Belen; Wetzstein, Gordon

    2018-04-01

    Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.

  11. Visualization of diversity in large multivariate data sets.

    PubMed

    Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald

    2010-01-01

    Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.

  12. Managing Rock and Paleomagnetic Data Flow with the MagIC Database: from Measurement and Analysis to Comprehensive Archive and Visualization

    NASA Astrophysics Data System (ADS)

    Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.

    2008-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB or TAFI or other user-defined compilations. Query results are displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, when supported by the data, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams.

  13. GeoMapApp as a platform for visualizing marine data from Polar Regions

    NASA Astrophysics Data System (ADS)

    Nitsche, F. O.; Ryan, W. B.; Carbotte, S. M.; Ferrini, V.; Goodwillie, A. M.; O'hara, S. H.; Weissel, R.; McLain, K.; Chinhong, C.; Arko, R. A.; Chan, S.; Morton, J. J.; Pomeroy, D.

    2012-12-01

    To maximize the investment in expensive fieldwork the resulting data should be re-used as much as possible. In addition, unnecessary duplication of data collection effort should be avoided. This becomes even more important if access to field areas is as difficult and expensive as it is in Polar Regions. Making existing data discoverable in an easy to use platform is key to improve re-use and avoid duplication. A common obstacle is that use of existing data is often limited to specialists who know of the data existence and also have the right tools to view and analyze these data. GeoMapApp is a free, interactive, map based tool that allows users to discover, visualize, and analyze a large number of data sets. In addition to a global view, it provides polar map projections for displaying data in Arctic and Antarctic areas. Data that have currently been added to the system include Arctic swath bathymetry data collected from the USCG icebreaker Healy. These data are collected almost continuously including from cruises where bathymetry is not the main objective and for which existence of the acquired data may not be well known. In contrast, existence of seismic data from the Antarctic continental margin is well known in the seismic community. They are archived at and can be accessed through the Antarctic Seismic Data Library System (SDLS). Incorporating these data into GeoMapApp makes an even broader community aware of these data and the custom interface, which includes capabilities to visualize and explore these data, allows users without specific software or knowledge of the underlying data format to access the data. In addition to investigating these datasets, GeoMapApp provides links to the actual data sources to allow specialists the opportunity to re-use the original data. Important identification of data sources and data references are achieved on different levels. For access to the actual Antarctic seismic data GeoMapApp links to the SDLS site, where users have to register before downloading the data and where they are informed about data owners. For the swath bathymetry data GeoMapApp links to an IEDA/MGDS web page for each cruise containing detailed information about investigators and surveys.

  14. Developing Aesthetically Compelling Visualizations for Documenting and Communicating Alaskan Glacier and Landscape Change

    NASA Astrophysics Data System (ADS)

    Molnia, B. F.

    2016-12-01

    For 50 years I have investigated glacier dynamics and attempted to convey this information to others. Since 2000, my focus has been on capturing and documenting decadal- and century-scale Alaskan glacier and landscape change using precision repeat photography, and on broadly communicating these results through simple, aesthetically compelling, unambiguous visualizations. As a young geologist, I spent the summer of 1968 on the Juneau Icefield, photographing its surface features and margins. Since then, I have taken 150,000 photographs of Alaskan glaciers and collected 5,000 historical Alaskan photographs taken by others, the earliest dating back to 1883. This database and my passion for photographing glaciers became the basis for an on-going investigation aimed at visually documenting glacier and landscape change at more than 200 previously photographed Alaskan locations in Glacier Bay and Kenai Fjords National Parks, Prince William Sound, and the Coast Mountains. Repeat photography is a technique in which a historical and a modern photograph, both having similar fields of view, are compared and contrasted to quantitatively and qualitatively determine their similarities and differences. In precision repeat photography, both photographs have the same field of view, ideally taken from the identical location. Since 2000, I have conducted nearly 20 field campaigns to systematically revisit and re-photograph more than 225 fields of view previously captured in the historical photographs. As aesthetics are important in successfully communicating what has changed, substantial time and effort are invested in capturing new, comparable, generally cloud-free photographs at each revisited site. The resulting modern images are then paired with similar-field-of-view historical images to produce compelling, aesthetic photo pairs that depict long-term glacier, landscape, and ecosystem changes. As a few sites have multiple historical images, photo triplets or quadruplets are sometimes possible. Several approaches have been tried to produce aesthetically compelling visualizations, including sliders, dissolves, adjacent pairs, a website, and DVDs. Providing high-resolution pairs to users and letting them adapt the images to their individual needs has also been very successful.

  15. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data that suits their preferences in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system presents information obtained from the user profile when the user searches music data, and information obtained from the feature space of music when the user edits the user profile.
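
    The abstract leaves the matching step unspecified; below is a minimal sketch of the generic content-based approach it describes: rank tracks by the similarity of their feature vectors to a profile vector averaged from tracks the user has liked. Track names and feature dimensions are hypothetical.

        import numpy as np

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def recommend(track_features, liked_ids, k=3):
            # Build a profile from liked tracks, then rank the rest by
            # cosine similarity in the acoustic feature space.
            profile = np.mean([track_features[t] for t in liked_ids], axis=0)
            scores = {t: cosine(f, profile)
                      for t, f in track_features.items() if t not in liked_ids}
            return sorted(scores, key=scores.get, reverse=True)[:k]

        # Hypothetical 3-D feature vectors (e.g. tempo, brightness, energy)
        features = {"a": np.array([0.9, 0.1, 0.4]), "b": np.array([0.8, 0.2, 0.5]),
                    "c": np.array([0.1, 0.9, 0.2]), "d": np.array([0.2, 0.8, 0.1])}
        print(recommend(features, liked_ids=["a"]))   # "b" ranks first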

  16. Mergeomics: a web server for identifying pathological pathways, networks, and key regulators via multidimensional data integration.

    PubMed

    Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia

    2016-09-09

    Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpouring of omics datasets that capture these changes. However, separate analyses of these various data provide only a fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent need for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server ( http://mergeomics.idre.ucla.edu/ ). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources, including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools, including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA), can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed in real time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.
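
    As a simplified stand-in for the question a set-enrichment module such as MSEA answers (the actual Mergeomics statistics are more elaborate than this), the sketch below tests whether disease-associated genes overlap a pathway more than chance would predict, using a one-sided hypergeometric test. The gene sets are synthetic.

        from scipy.stats import hypergeom

        def enrichment_p(universe, disease_genes, pathway_genes):
            # One-sided hypergeometric test for pathway enrichment --
            # a simplified stand-in, not the Mergeomics MSEA statistic.
            M = len(universe)                                   # all genes
            n = len(pathway_genes & universe)                   # pathway size
            N = len(disease_genes & universe)                   # genes drawn
            k = len(disease_genes & pathway_genes & universe)   # overlap
            return hypergeom.sf(k - 1, M, n, N)   # P(overlap >= k)

        universe = set(range(20000))
        disease = set(range(300))         # hypothetical associated genes
        pathway = set(range(250, 400))    # hypothetical pathway, 50-gene overlap
        print(enrichment_p(universe, disease, pathway))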

  17. CCProf: exploring conformational change profile of proteins

    PubMed Central

    Chang, Che-Wei; Chou, Chai-Wei; Chang, Darby Tien-Hao

    2016-01-01

    In many biological processes, proteins have important interactions with various molecules such as proteins, ions or ligands. Many proteins undergo conformational changes upon these interactions, where regions with large conformational changes are critical to the interactions. This work presents the CCProf platform, which provides conformational changes of entire proteins, named the conformational change profile (CCP) in this context. CCProf aims to be a platform where users can study potential causes of novel conformational changes. It provides 10 biological features, including conformational change, potential binding target site, secondary structure, conservation, disorder propensity, hydropathy propensity, sequence domain, structural domain, phosphorylation site and catalytic site. All this information is integrated into a well-aligned view, so that researchers can visually capture important relationships between different biological features. CCProf contains 986,187 protein structure pairs for 3,123 proteins. In addition, CCProf provides a 3D view in which users can see the protein structures before and after conformational changes, as well as binding targets that induce conformational changes. All information shown in CCProf (e.g. CCP, binding targets and protein structures), including intermediate data, is available for download to expedite further analyses. Database URL: http://zoro.ee.ncku.edu.tw/ccprof/ PMID:27016699
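
    The abstract does not define how the profile is computed; a crude per-residue version can be obtained as the displacement of matched residues between two superimposed conformations. A minimal numpy sketch, assuming already-aligned (N, 3) C-alpha coordinate arrays (the data here are synthetic):

        import numpy as np

        def change_profile(coords_a, coords_b):
            # Per-residue displacement between two superimposed
            # conformations -- a crude stand-in for a conformational
            # change profile, not CCProf's actual definition.
            return np.linalg.norm(coords_a - coords_b, axis=1)

        rng = np.random.default_rng(0)
        apo = rng.normal(size=(100, 3))                       # fake structure
        holo = apo + rng.normal(scale=0.3, size=apo.shape)    # perturbed copy
        profile = change_profile(apo, holo)
        print(profile.argmax(), profile.max())   # most-changed residue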

  18. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process tens of thousands of data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. During the upload process the owner has the option of keeping the contribution private so it can be viewed in the context of other data sets and visualized using the suite of MagIC plotting tools. Alternatively, the new data can be password protected and shared with a group of users at the contributor's discretion. Once they are published and the owner is comfortable making the upload publicly accessible, the MagIC Editing Committee reviews the contribution for adherence to the MagIC data model and conventions to ensure a high level of data integrity.

  19. [Change settings for visual analyzer of child users of mobile communication: longitudinal study].

    PubMed

    Khorseva, N I; Grigor'ev, Iu G; Gorbunova, N V

    2014-01-01

    The paper presents the results of longitudinal monitoring of changes in the parameters of simple visual-motor reaction, visual acuity and the rate of visual discrimination in child users of mobile communication, which indicate the multivariability of the possible effects of radiation from mobile phones on the visual system of children.

  20. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525

  1. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  2. GES DAAC HDF Data Processing and Visualization Tools

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Cho, S.; Johnson, J.; Li, J.; Liu, Z.; Lu, L.; Pollack, N.; Qin, J.; Savtchenko, A.; Teng, B.

    2002-12-01

    The Goddard Earth Sciences (GES) Distributed Active Archive Center (DAAC) plays a major role in enabling basic scientific research and providing access to scientific data for the general user community. Several GES DAAC Data Support Teams provide expert assistance to users in accessing data, including information on visualization tools and documentation for data products. To provide easy access to the science data, the data support teams have additionally developed many online and desktop tools for data processing and visualization. This presentation is an overview of major HDF tools implemented at the GES DAAC and aimed at optimizing access to EOS data for the Earth Sciences community. GES DAAC ONLINE TOOLS: The MODIS and AIRS on-demand Channel/Variable Subsetters are web-based, on-the-fly/on-demand subsetters that perform channel/variable subsetting and restructuring for Level-1B and Level-2 data products. Users can specify criteria to subset data files with desired channels and variables and then download the subsetted file. AIRS QuickLook is a CGI/IDL combo package that allows users to view AIRS/HSB/AMSU Level-1B data online by specifying a channel prior to obtaining data. A global map is also provided along with the image to show geographic coverage of the granule and flight direction of the spacecraft. OASIS (Online data AnalySIS) is an IDL-based HTML/CGI interface for search, selection, and simple analysis of earth science data. It supports binary and GRIB formatted data, such as TOVS, Data Assimilation products, and some NCEP operational products. The TRMM Online Analysis System is designed for quick exploration, analyses, and visualization of TRMM Level-3 and other precipitation products. The products consist of the daily (3B42), monthly (3B43), near-real-time (3B42RT), and Willmott's climate data. The system is also designed to be simple and easy to use - users can plot the average or accumulated rainfall over their region of interest for a given time period, or plot the time series of regional rainfall average. WebGIS is online web software that implements the Open GIS Consortium (OGC) standards for mapping requests and rendering. It allows users access to TRMM, MODIS, SeaWiFS, and AVHRR data from several DAAC map servers, as well as externally served data such as political boundaries, population centers, lakes, rivers, and elevation. GES DAAC DESKTOP TOOLS: HDFLook-MODIS is a new, multifunctional data processing and visualization tool for Radiometric and Geolocation, Atmosphere, Ocean, and Land MODIS HDF-EOS data. Features include (1) access and visualization of all swath (Levels 1 and 2) MODIS and AIRS products, and gridded (Levels 3 and 4) MODIS products; (2) re-mapping of swath data to a world map; (3) geo-projection conversion; (4) interactive and batch mode capabilities; (5) subsetting and multi-granule processing; and (6) data conversion. SIMAP is an IDL-based script designed to read and map MODIS Level 1B (L1B) and Level 2 (L2) Ocean and Atmosphere products. It is a non-interactive, command-line executed tool. The resulting maps are scaled to physical units (e.g., radiances, concentrations, brightness temperatures) and saved in binary files. TRMM HDF (in C and Fortran) reads in TRMM HDF data files and writes out user-selected SDS arrays and Vdata tables as separate flat binary files.

  3. Lightweight UDP Pervasive Protocol in Smart Home Environment Based on Labview

    NASA Astrophysics Data System (ADS)

    Kurniawan, Wijaya; Hannats Hanafi Ichsan, Mochammad; Rizqika Akbar, Sabriansyah; Arwani, Issa

    2017-04-01

    TCP (Transmission Control Protocol) technology is not a problem in a reliable environment, but it is a poor fit for an environment in which the entire Smart Home network is connected locally. Pervasive protocols that currently employ TCP technology transmit data more slowly, because they must perform a handshaking process in advance and cannot broadcast data. Moreover, a Smart Home strain monitoring system does not require large or complex data transmissions between the monitoring site and the monitoring center. UDP (User Datagram Protocol) technology is quick and simple in its data transmission process: UDP can broadcast messages because it does not require handshaking, and it uses memory more efficiently. LabVIEW is a graphical programming environment for processing and visualizing data in the field of data acquisition. This paper examines a pervasive UDP protocol implementation in a smart home environment based on LabVIEW. The UDP protocol was coded in LabVIEW, experiments were performed on a PC, and the implementation works properly.
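
    LabVIEW code is graphical and cannot be reproduced in text, but the UDP properties the paper relies on (no handshake, broadcast to all local nodes) look like this in Python; the port number and payload are arbitrary.

        import socket

        # Sender (e.g. the monitoring site): a single datagram, no
        # handshake, broadcast to every node on the local subnet.
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        tx.sendto(b"strain=0.0042", ("255.255.255.255", 5005))
        tx.close()

        # Receiver (e.g. the monitoring center) -- run on another node,
        # started before the sender transmits.
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.bind(("", 5005))
        data, addr = rx.recvfrom(1024)   # blocks until a datagram arrives
        print(addr, data.decode())
        rx.close()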

  4. Development of an optoelectronic holographic platform for otolaryngology applications

    NASA Astrophysics Data System (ADS)

    Harrington, Ellery; Dobrev, Ivo; Bapat, Nikhil; Flores, Jorge Mauricio; Furlong, Cosme; Rosowski, John; Cheng, Jeffery Tao; Scarpino, Chris; Ravicz, Michael

    2010-08-01

    In this paper, we present advances in our development of an optoelectronic holographic computing platform able to quantitatively measure full-field-of-view nanometer-scale movements of the tympanic membrane (TM). These measurements can facilitate otologists' ability to study and diagnose hearing disorders in humans. The holographic platform consists of a laser delivery system and an otoscope. The control software, called LaserView, is written in Visual C++ and handles communication and synchronization between hardware components. It provides a user-friendly interface for viewing holographic images, with several tools to automate holography-related tasks and facilitate hardware communication. The software uses a series of concurrent threads to acquire images, control the hardware, and display quantitative holographic data at video rates in two modes of operation: optoelectronic holography and lensless digital holography. The holographic platform has been used to perform experiments on several live and post-mortem specimens, and is to be deployed in a medical research environment, with future developments leading to its eventual clinical use.

  5. CuGene as a tool to view and explore genomic data

    NASA Astrophysics Data System (ADS)

    Haponiuk, Michał; Pawełkowicz, Magdalena; Przybecki, Zbigniew; Nowak, Robert M.

    2017-08-01

    Integrated CuGene is an easy-to-use, open-source, online tool that can be used to browse, analyze, and query genomic data and annotations. It places annotation tracks beneath genome coordinate positions, allowing rapid visual correlation of different types of information. It also allows users to upload and display their own experimental results or annotation sets. An important functionality of the application is the ability to find similarity between sequences by applying four algorithms of differing accuracy. The presented tool was tested on real genomic data and is extensively used by the Polish Consortium of Cucumber Genome Sequencing.

  6. In the Loop: The Organization of Team-Based Communication in a Patient-Centered Clinical Collaboration System.

    PubMed

    Kurahashi, Allison M; Weinstein, Peter B; Jamieson, Trevor; Stinson, Jennifer N; Cafazzo, Joseph A; Lokuge, Bhadra; Morita, Plinio P; Cohen, Eyal; Rapoport, Adam; Bezjak, Andrea; Husain, Amna

    2016-03-24

    We describe the development and evaluation of a secure Web-based system for collaborative care called Loop. Loop assembles the care team, with the patient as an integral member, in a secure space. The objectives of this paper are to present the iterative design of the separate views for health care providers (HCPs) within each patient's secure space, and to examine patients', caregivers', and HCPs' perspectives on this separate view for HCP-only communication. The overall research program includes cycles of ethnography, prototyping, usability testing, and pilot testing; this paper describes the usability testing phase that directly informed development. A descriptive qualitative approach was used to analyze participant perspectives that emerged during usability testing. During usability testing, we sampled 89 participants from three user groups: 23 patients, 19 caregivers, and 47 HCPs. Almost all perspectives from the three user groups supported the need for an HCP-only communication view. In an earlier prototype, the visual presentation caused confusion among HCPs, when reading and composing messages, about whether a message was visible to the patient. Usability testing guided us to design a more deliberate distinction, at the time of composing a message, between posting in the Patient and Team view and the Health Care Provider Only view; once posted, a message is distinguished by an icon. The team made a decision to incorporate an HCP-only communication view based on findings during earlier phases of work. During usability testing we tested the separate communication views, and all groups supported this partition. We spent considerable effort designing the partition; however, preliminary findings from the next phase of evaluation, pilot testing, show that the Patient and Team communication is predominantly used. This demonstrates the importance of a subsequent phase of the clinical trial of Loop to validate the concept and design.

  7. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: first, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training; second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  8. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics in a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, allowing the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
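
    For readers unfamiliar with the output quantity: a pulse-height spectrum is the histogram of the per-event detected signal. The toy Monte Carlo below (not hybridMANTIS; all parameter values are hypothetical) shows how one arises in an indirect detector, where each absorbed x-ray yields a fluctuating number of optical photons and only a fraction reach the sensor.

        import numpy as np

        rng = np.random.default_rng(1)
        n_xrays = 100_000
        mean_optical = 1000      # hypothetical optical photons per absorbed x-ray
        collection_eff = 0.6     # hypothetical fraction reaching the sensor

        # Optical yield fluctuates (Poisson); each photon is collected
        # with probability collection_eff (binomial thinning).
        generated = rng.poisson(mean_optical, n_xrays)
        detected = rng.binomial(generated, collection_eff)

        # The histogram of detected counts is the pulse-height spectrum.
        spectrum, edges = np.histogram(detected, bins=100)
        print(detected.mean(), detected.std())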

  9. Automatic User Interface Generation for Visualizing Big Geoscience Data

    NASA Astrophysics Data System (ADS)

    Yu, H.; Wu, J.; Zhou, Y.; Tang, Z.; Kuo, K. S.

    2016-12-01

    Along with advanced computing and observation technologies, geoscience and its related fields have been generating a large amount of data at an unprecedented growth rate. Visualization becomes an increasingly attractive and feasible means for researchers to effectively and efficiently access and explore data to gain new understandings and discoveries. However, visualization has been challenging due to a lack of effective data models and visual representations to tackle the heterogeneity of geoscience data. We propose a new geoscience data visualization framework that leverages interface automata theory to automatically generate the user interface (UI). Our study has the following three main contributions. First, geoscience data has a unique hierarchical structure and complex formats, and therefore it is relatively easy for users to get lost or confused during their exploration of the data. By applying an interface automata model to the UI design, users can be clearly guided to find the exact visualization and analysis that they want. In addition, from a development perspective, an interface automaton is also easier to understand than conditional statements, which can simplify the development process. Second, it is common for geoscience data to have discontinuities in its hierarchical structure. The application of interface automata can prevent users from suffering automation surprises and enhance the user experience. Third, to support a variety of different data visualizations and analyses, our design with interface automata also makes applications extensible, in that a new visualization function or a new data group can easily be added to an existing application, which reduces the overhead of maintenance significantly. We demonstrate the effectiveness of our framework using real-world applications.
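
    A minimal sketch of the core idea (state and action names are invented for illustration): model the UI as an automaton whose current state determines the only actions the generated interface exposes, so invalid navigation paths are never offered.

        # Each state lists the only UI actions it accepts; a UI generator
        # would render exactly these as widgets.
        TRANSITIONS = {
            "dataset_list":  {"select_dataset": "variable_list"},
            "variable_list": {"select_variable": "plot_config",
                              "back": "dataset_list"},
            "plot_config":   {"render": "view", "back": "variable_list"},
            "view":          {"back": "plot_config"},
        }

        class InterfaceAutomaton:
            def __init__(self, start="dataset_list"):
                self.state = start

            def available_actions(self):
                return sorted(TRANSITIONS[self.state])

            def fire(self, action):
                if action not in TRANSITIONS[self.state]:
                    raise ValueError(f"{action!r} not allowed in {self.state!r}")
                self.state = TRANSITIONS[self.state][action]
                return self.state

        ui = InterfaceAutomaton()
        print(ui.available_actions())   # ['select_dataset'] -- nothing else offered
        ui.fire("select_dataset")
        ui.fire("select_variable")
        print(ui.state)                 # plot_config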

  10. Real Data and Rapid Results: Ocean Color Data Analysis with Giovanni (GES DISC Interactive Online Visualization and ANalysis Infrastructure)

    NASA Technical Reports Server (NTRS)

    Acker, J. G.; Leptoukh, G.; Kempler, S.; Gregg, W.; Berrick, S.; Zhu, T.; Liu, Z.; Rui, H.; Shen, S.

    2004-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step in addressing the challenge of using archived Earth Observing System (EOS) data for regional or global studies by developing an infrastructure with a World Wide Web interface which allows online, interactive data analysis: the GES DISC Interactive Online Visualization and ANalysis Infrastructure, or "Giovanni." Giovanni provides a data analysis environment that is largely independent of the underlying data file format. The Ocean Color Time-Series Project has created an initial implementation of Giovanni using monthly Standard Mapped Image (SMI) data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission. Giovanni users select geophysical parameters and the geographical region and time period of interest. The system rapidly generates graphical or ASCII numerical data output. Currently available output options are: area plot (averaged or accumulated over any available data period for any rectangular area); time plot (time series averaged over any rectangular area); Hovmöller plots (image view of any longitude-time and latitude-time cross sections); ASCII output for all plot types; and area plot animations. Future plans include correlation plots, output formats compatible with Geographical Information Systems (GIS), and higher temporal resolution data. The Ocean Color Time-Series Project will produce sensor-independent ocean color data beginning with the Coastal Zone Color Scanner (CZCS) mission and extending through SeaWiFS and Moderate Resolution Imaging Spectroradiometer (MODIS) data sets, and will enable incorporation of Visible/Infrared Imaging Radiometer Suite (VIIRS) data, which will be added to Giovanni. The first phase of Giovanni will also include tutorials demonstrating the use of Giovanni and collaborative assistance in the development of research projects using the SeaWiFS and Ocean Color Time-Series Project data in the online Laboratory for Ocean Color Users (LOCUS). The synergy of Giovanni with high-quality ocean color data provides users with the ability to investigate a variety of important oceanic phenomena, such as coastal primary productivity related to pelagic fisheries, seasonal patterns and interannual variability, interdependence of atmospheric dust aerosols and harmful algal blooms, and the potential effects of climate change on oceanic productivity.
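
    The area plot described above reduces, at each time step, to an area-weighted mean over a latitude-longitude box. A minimal numpy sketch of that computation (the grid resolution, variable, and region are hypothetical):

        import numpy as np

        def area_average(field, lats, lons, lat_box, lon_box):
            # Cosine-latitude-weighted mean of a 2-D gridded field over a
            # rectangular region -- the computation behind an "area plot".
            la = (lats >= lat_box[0]) & (lats <= lat_box[1])
            lo = (lons >= lon_box[0]) & (lons <= lon_box[1])
            sub = field[np.ix_(la, lo)]
            w = np.cos(np.radians(lats[la]))[:, None] * np.ones((1, lo.sum()))
            w = np.where(np.isnan(sub), 0.0, w)   # skip missing pixels
            return np.nansum(sub * w) / w.sum()

        lats = np.arange(-89.5, 90.0, 1.0)
        lons = np.arange(-179.5, 180.0, 1.0)
        chlorophyll = np.random.default_rng(2).random((lats.size, lons.size))
        print(area_average(chlorophyll, lats, lons,
                           lat_box=(0, 30), lon_box=(-80, -40)))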

  11. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).

  12. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091

  13. Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.

    PubMed

    Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G

    2015-01-01

    This paper presents a method to evaluate hand-eye coordination with a master-slave surgical robot by measuring activation of the intraparietal sulcus in the user's brain during virtual manipulation. The objective is to examine the changes in activity of the intraparietal sulcus when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback passed; (ii) the user moved with closed eyes, with only somatic feedback passed; (iii) the user only gazed at the screen, with only visual feedback passed. Among all participants, brain activity was significantly higher when controlling the virtual arm naturally (p < 0.05) than when moving with closed eyes or only gazing. In conclusion, the brain can activate according to the agreement of visual and somatic sensory feedback.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward

    There exists a need for effective and easy-to-use software tools supporting the analysis of complex Electrocorticography (ECoG) data. Understanding how epileptic seizures develop or identifying diagnostic indicators for neurological diseases requires the in-depth analysis of neural activity data from ECoG. Such data is multi-scale and of high spatio-temporal resolution. Comprehensive analysis of this data should be supported by interactive visual analysis methods that allow a scientist to understand functional patterns at varying levels of granularity and comprehend its time-varying behavior. We introduce a novel multi-scale visual analysis system, ECoG ClusterFlow, for the detailed exploration of ECoG data. Our system detects and visualizes dynamic high-level structures, such as communities, derived from the time-varying connectivity network. The system supports two major views: 1) an overview summarizing the evolution of clusters over time and 2) an electrode view using a hierarchical glyph-based design to visualize the propagation of clusters in their spatial, anatomical context. We present case studies that were performed in collaboration with neuroscientists and neurosurgeons using simulated and recorded epileptic seizure data to demonstrate our system's effectiveness. ECoG ClusterFlow supports the comparison of spatio-temporal patterns for specific time intervals and allows a user to utilize various clustering algorithms. Neuroscientists can identify the site of seizure genesis and its spatial progression during the various stages of a seizure. Our system serves as a fast and powerful means for the generation of preliminary hypotheses that can be used as a basis for subsequent application of rigorous statistical methods, with the ultimate goal being the clinical treatment of epileptogenic zones.
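
    The abstract does not name the clustering algorithms; the sketch below illustrates the general pattern with off-the-shelf modularity-based community detection applied per time window to a thresholded electrode-correlation matrix. It is a stand-in for, not a reproduction of, the ECoG ClusterFlow pipeline.

        import numpy as np
        import networkx as nx
        from networkx.algorithms import community

        def communities_per_window(windows, threshold=0.6):
            # `windows` is an iterable of (n_elec, n_elec) correlation
            # matrices, one per time window.
            result = []
            for w in windows:
                g = nx.from_numpy_array(np.where(w >= threshold, w, 0.0))
                g.remove_edges_from(nx.selfloop_edges(g))
                result.append(list(community.greedy_modularity_communities(g)))
            return result

        # Synthetic data: electrodes 0-7 share a seizure-like signal and
        # should cluster together; electrodes 8-15 are independent noise.
        rng = np.random.default_rng(3)
        shared = rng.normal(size=1000)
        signals = np.vstack(
            [shared + 0.5 * rng.normal(size=1000) for _ in range(8)]
            + [rng.normal(size=(8, 1000))])
        windows = [np.corrcoef(signals[:, i:i + 200]) for i in range(0, 1000, 200)]
        for t, comms in enumerate(communities_per_window(windows)):
            print(t, [sorted(c) for c in comms])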

  15. An evaluation-guided approach for effective data visualization on tablets

    NASA Astrophysics Data System (ADS)

    Games, Peter S.; Joshi, Alark

    2015-01-01

    There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.

  16. Real-time augmented reality overlay for an energy-efficient car study

    NASA Astrophysics Data System (ADS)

    Wozniak, Peter; Javahiraly, Nicolas; Curticapean, Dan

    2017-06-01

    Our university carries out various research projects. Among others, the Schluckspecht project is interdisciplinary work on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. To increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize various data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented by additional information, and that information should appear at the correct position on the components. An engine, for example, could show the current rpm and consumption values; a battery, on the other hand, could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.

  17. Moveable Feast: A Distributed-Data Case Study Engine for Yotc

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.

    2014-12-01

    The promise of YOTC, a richly detailed global view of the tropical atmosphere and its processes down to 1/4-degree resolution, can now be attained without a lot of downloading and programming chores. Many YOTC datasets are served online: all the global reanalyses, including the YOTC-specific ECMWF 1/4-degree set, as well as satellite data including IR and TRMM 3B42. Data integration and visualization are easy with a new YOTC 'case study engine' in the free, all-platform, click-to-install Integrated Data Viewer (IDV) software from Unidata. All the dataset access points, along with many evocative and adjustable display layers, can be loaded with a single click (and then a few minutes' wait), using the special YOTC bundle in the Mapes IDV collection (http://www.rsmas.miami.edu/users/bmapes/MapesIDVcollection.html). Time ranges can be adjusted with a calendar widget, and spatial subset regions can be selected with a shift-rubberband mouse operation. The talk will showcase visualizations of several YOTC weather events and process estimates, and give a view of how these and any other YOTC cases can be reproduced on any networked computer.

  18. Parametric-Studies and Data-Plotting Modules for the SOAP

    NASA Technical Reports Server (NTRS)

    2008-01-01

    "Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensionalappearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.

  19. Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.

    PubMed

    Andrews, T J; Coppola, D M

    1999-08-01

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.

  20. View Combination: A Generalization Mechanism for Visual Recognition

    ERIC Educational Resources Information Center

    Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric

    2011-01-01

    We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…
