Integrating advanced visualization technology into the planetary geoscience workflow
NASA Astrophysics Data System (ADS)
Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb
2011-09-01
Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.
Molecules to maps: tools for visualization and interaction in support of computational biology.
Kraemer, E T; Ferrin, T E
1998-01-01
The volume of data produced by genome projects, X-ray crystallography, NMR spectroscopy, and electron and confocal microscopy presents the bioinformatics community with new challenges for analyzing, understanding, and exchanging these data. At the 1998 Pacific Symposium on Biocomputing, a track entitled 'Molecules to Maps: Tools for Visualization and Interaction in Computational Biology' provided tool developers and users with the opportunity to discuss advances in tools and techniques that assist scientists in evaluating, absorbing, navigating, and correlating this sea of information through visualization and user interaction. In this paper we present these advances and discuss some of the challenges that remain to be solved.
A graph algebra for scalable visual analytics.
Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V
2012-01-01
Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
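To make the two atomic operators named in the abstract concrete, here is a minimal sketch (not the authors' formalism or implementation) of selection and aggregation over an attributed graph using networkx; the attribute key "club", the weight-summing rule, and the degree predicate are illustrative assumptions.

```python
# Hypothetical sketch of two graph-algebra atomic operators: selection and aggregation.
import networkx as nx

def select(G, predicate):
    """Selection: keep only the nodes that satisfy a predicate on (node, attributes)."""
    keep = [n for n, data in G.nodes(data=True) if predicate(n, data)]
    return G.subgraph(keep).copy()

def aggregate(G, key):
    """Aggregation: collapse nodes sharing an attribute value into super-nodes,
    summing the weights of edges that cross groups (intra-group edges are dropped)."""
    H = nx.Graph()
    group = {n: data.get(key) for n, data in G.nodes(data=True)}
    H.add_nodes_from(set(group.values()))
    for u, v, data in G.edges(data=True):
        gu, gv = group[u], group[v]
        if gu == gv:
            continue
        prev = H.get_edge_data(gu, gv, default={"weight": 0})["weight"]
        H.add_edge(gu, gv, weight=prev + data.get("weight", 1))
    return H

# Example: select well-connected nodes, then aggregate the full graph by the "club" attribute.
G = nx.karate_club_graph()
dense = select(G, lambda n, d: G.degree(n) >= 5)
clubs = aggregate(G, "club")
print(dense.number_of_nodes(), list(clubs.edges(data=True)))
```

Composing such operators (select, then aggregate) is the kind of documented, replayable analysis the abstract argues a graph algebra should support.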
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time
Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
2017-01-01
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection© (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21–66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants’ performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. PMID:28218598
A Comparison of Satellite Conjunction Analysis Screening Tools
2011-09-01
visualization tool. Version 13.1.4 for Linux was tested. The SOAP conjunction analysis function does not have the capacity to perform the large... was examined by SOAP to confirm the conjunction. STK Advanced CAT (Conjunction Analysis Tools) is an add-on module for STK... run with each tool. When attempting to perform the seven-day all-vs-all analysis with STK Advanced CAT, the program consistently crashed during report...
Human Factors Evaluation of Advanced Electric Power Grid Visualization Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Dauenhauer, Peter M.; Wierks, Tamara G.
This report describes initial human factors evaluation of four visualization tools (Graphical Contingency Analysis, Force Directed Graphs, Phasor State Estimator and Mode Meter/ Mode Shapes) developed by PNNL, and proposed test plans that may be implemented to evaluate their utility in scenario-based experiments.
Visual Information for the Desktop, version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2006-03-29
VZIN integrates visual analytics capabilities into popular desktop tools to aid a user in searching and understanding an information space. VZIN allows users to Drag-Drop-Visualize-Explore-Organize information within tools such as Microsoft Office, Windows Explorer, Excel, and Outlook. VZIN is tailorable to specific client or industry requirements. VZIN follows the desktop metaphors so that advanced analytical capabilities are available with minimal user training.
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time.
Brooks, Johnell; Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection © (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21-66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants' performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. Copyright © 2017 by the American Occupational Therapy Association, Inc.
Visualization Skills: A Prerequisite to Advanced Solid Modeling
ERIC Educational Resources Information Center
Gow, George
2007-01-01
Many educators believe that solid modeling software has made teaching two- and three-dimensional visualization skills obsolete. They claim that the visual tools built into the solid modeling software serve as a replacement for the CAD operator's personal visualization skills. They also claim that because solid modeling software can produce…
ERIC Educational Resources Information Center
Tuvi-Arad, Inbal; Blonder, Ron
2010-01-01
In this paper we describe the learning process of a group of experienced chemistry teachers in a specially designed workshop on molecular symmetry and continuous symmetry. The workshop was based on interactive visualization tools that allow molecules and their symmetry elements to be rotated in three dimensions. The topic of continuous symmetry is…
ERIC Educational Resources Information Center
Rogness, Jonathan
2011-01-01
Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…
NASA Astrophysics Data System (ADS)
Ellins, K. K.; Eriksson, S. C.; Samsel, F.; Lavier, L.
2017-12-01
A new undergraduate, upper-level geoscience course was developed and taught by faculty and staff of the UT Austin Jackson School of Geosciences, the Center for Agile Technology, and the Texas Advanced Computing Center. The course examined the role of the visual arts in placing the scientific process and knowledge in a broader context and introduced students to innovations in the visual arts that promote scientific investigation through collaboration between geoscientists and artists. The course addressed (1) the role of the visual arts in teaching geoscience concepts and promoting geoscience learning; (2) the application of innovative visualization and artistic techniques to large volumes of geoscience data to enhance scientific understanding and to move scientific investigation forward; and (3) the illustrative power of art to communicate geoscience to the public. In-class activities and discussions, computer lab instruction on the application of ParaView software, reading assignments, lectures, and group projects with presentations comprised the two-credit, semester-long "special topics" course, which was taken by geoscience, computer science, and engineering students. Assessment of student learning was carried out by the instructors, and course evaluation was done by an external evaluator using rubrics, Likert-scale surveys, and focus groups. The course achieved its goal of teaching students the concepts and techniques of the visual arts; the final projects demonstrated this learning, along with the ability to communicate geologic concepts using what students had learned in the course. The basic skill of sketching for learning and best practices in visual communication were used extensively and, in most cases, very effectively. The advanced visualization tool ParaView received mixed reviews because students lacked the time to learn it thoroughly and because it is not a tool used routinely in geoscience; senior students with advanced computer skills, however, saw the importance of this tool. Students worked in teams, with varying effectiveness, and made suggestions for improving future offerings of the course.
Intelligent video storage of visual evidences on site in fast deployment
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois
2004-07-01
In this article we present a generic, flexible, scalable, and robust approach for an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistical support, as it embeds low-complexity devices (PCs and cameras) that communicate through a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, or an air crash, or before an attempted attack. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and displays system performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values corroborate well with extensive experimental investigations and remain consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott d-index (d_will = 0.981) reflect a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
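For readers unfamiliar with the two agreement statistics quoted above, the sketch below computes Willmott's index of agreement and a relative error for paired observed/predicted values. The abstract does not state the exact relative-error formula used, so a mean relative deviation is assumed here, and the sample arrays are invented for illustration only.

```python
# Hedged sketch of the agreement statistics cited in the abstract (not the authors' code).
import numpy as np

def willmott_d(observed, predicted):
    """Willmott index of agreement: d = 1 - sum((P-O)^2) / sum((|P-Obar| + |O-Obar|)^2)."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    o_bar = o.mean()
    return 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)

def relative_error(observed, predicted):
    """Mean relative deviation of predictions from observations (assumed definition)."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean(np.abs(p - o) / np.abs(o))

observed  = np.array([12.1, 15.4, 18.9, 22.3])   # illustrative measured values
predicted = np.array([11.8, 15.9, 18.2, 23.0])   # illustrative model output
print(f"d_will = {willmott_d(observed, predicted):.3f}, RE = {relative_error(observed, predicted):.3f}")
```

Values of d_will near 1 and RE near 0 indicate the close model-experiment agreement reported above.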
Advances in bioluminescence imaging: new probes from old recipes.
Yao, Zi; Zhang, Brendan S; Prescher, Jennifer A
2018-06-04
Bioluminescent probes are powerful tools for visualizing biology in live tissues and whole animals. Recent years have seen a surge in the number of new luciferases, luciferins, and related tools available for bioluminescence imaging. Many were crafted using classic methods of optical probe design and engineering. Here we highlight recent advances in bioluminescent tool discovery and development, along with applications of the probes in cells, tissues, and organisms. Collectively, these tools are improving in vivo imaging capabilities and bolstering new research directions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Richard M. DeGraaf; Anna M. Lester; Mariko Yamasaki; William B. Leak
2007-01-01
Visualization is a powerful tool for depicting projections of forest structure and landscape conditions, for communicating habitat management practices, and for providing a landscape context to private landowners and to those concerned with public land management. Recent advances in visualization technology, especially in graphics quality, ease of use, and relative...
Igloo-Plot: a tool for visualization of multidimensional datasets.
Kuntal, Bhusan K; Ghosh, Tarini Shankar; Mande, Sharmila S
2014-01-01
Advances in science and technology have resulted in an exponential growth of multivariate (or multi-dimensional) datasets which are being generated from various research areas especially in the domain of biological sciences. Visualization and analysis of such data (with the objective of uncovering the hidden patterns therein) is an important and challenging task. We present a tool, called Igloo-Plot, for efficient visualization of multidimensional datasets. The tool addresses some of the key limitations of contemporary multivariate visualization and analysis tools. The visualization layout, not only facilitates an easy identification of clusters of data-points having similar feature compositions, but also the 'marker features' specific to each of these clusters. The applicability of the various functionalities implemented herein is demonstrated using several well studied multi-dimensional datasets. Igloo-Plot is expected to be a valuable resource for researchers working in multivariate data mining studies. Igloo-Plot is available for download from: http://metagenomics.atc.tcs.com/IglooPlot/. Copyright © 2014 Elsevier Inc. All rights reserved.
Development of a Geometric Spatial Visualization Tool
ERIC Educational Resources Information Center
Ganesh, Bibi; Wilhelm, Jennifer; Sherrod, Sonya
2009-01-01
This paper documents the development of the Geometric Spatial Assessment. We detail the development of this instrument which was designed to identify middle school students' strategies and advancement in understanding of four geometric concept domains (geometric spatial visualization, spatial projection, cardinal directions, and periodic patterns)…
Icarus: visualizer for de novo assembly evaluation.
Mikheenko, Alla; Valin, Gleb; Prjibelski, Andrey; Saveliev, Vladislav; Gurevich, Alexey
2016-11-01
Data visualization plays an increasingly important role in NGS data analysis. With advances in both sequencing and computational technologies, it has become a new bottleneck in genomics studies. Indeed, evaluation of de novo genome assemblies is one of the areas that can benefit from visualization. However, even though multiple quality assessment methods are now available, existing visualization tools are hardly suitable for this purpose. Here, we present Icarus, a novel genome visualizer for accurate assessment and analysis of genomic draft assemblies, which is based on the tool QUAST. Icarus can be used in studies where a related reference genome is available, as well as for non-model organisms. The tool is available online and as a standalone application at http://cab.spbu.ru/software/icarus. Contact: aleksey.gurevich@spbu.ru. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
OR2020: The Operating Room of the Future
2004-05-01
3.3 Technical Requirements: Standards and Tools for Improved Operating Room Process Integration... Image processing and visualization tools must be made available to the operating room. 5. Communications issues must be addressed and aim toward... protocols for effectively performing advanced surgeries and using telecommunications-ready tools as needed. The following recommendations were made...
Contextual signals in visual cortex.
Khan, Adil G; Hofer, Sonja B
2018-06-05
Vision is an active process. What we perceive strongly depends on our actions, intentions and expectations. During visual processing, these internal signals therefore need to be integrated with the visual information from the retina. The mechanisms of how this is achieved by the visual system are still poorly understood. Advances in recording and manipulating neuronal activity in specific cell types and axonal projections together with tools for circuit tracing are beginning to shed light on the neuronal circuit mechanisms of how internal, contextual signals shape sensory representations. Here we review recent work, primarily in mice, that has advanced our understanding of these processes, focusing on contextual signals related to locomotion, behavioural relevance and predictions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT
NASA Technical Reports Server (NTRS)
Maxwell, Thomas
2012-01-01
Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
Stereoscopic applications for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2007-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
An Update on Design Tools for Optimization of CMC 3D Fiber Architectures
NASA Technical Reports Server (NTRS)
Lang, J.; DiCarlo, J.
2012-01-01
Objective: Describe and update progress for NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: Understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature. Developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures. Developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures. Validating tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture. Applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, and combining them into a user-friendly software program.
Freiburg RNA tools: a central online resource for RNA-focused research and teaching.
Raden, Martin; Ali, Syed M; Alkhnbashi, Omer S; Busch, Anke; Costa, Fabrizio; Davis, Jason A; Eggenhofer, Florian; Gelhausen, Rick; Georg, Jens; Heyne, Steffen; Hiller, Michael; Kundu, Kousik; Kleinkauf, Robert; Lott, Steffen C; Mohamed, Mostafa M; Mattheis, Alexander; Miladi, Milad; Richter, Andreas S; Will, Sebastian; Wolff, Joachim; Wright, Patrick R; Backofen, Rolf
2018-05-21
The Freiburg RNA tools webserver is a well established online resource for RNA-focused research. It provides a unified user interface and comprehensive result visualization for efficient command line tools. The webserver includes RNA-RNA interaction prediction (IntaRNA, CopraRNA, metaMIR), sRNA homology search (GLASSgo), sequence-structure alignments (LocARNA, MARNA, CARNA, ExpaRNA), CRISPR repeat classification (CRISPRmap), sequence design (antaRNA, INFO-RNA, SECISDesign), structure aberration evaluation of point mutations (RaSE), RNA/protein-family model visualization (CMV), and other methods. Open education resources offer interactive visualizations of RNA structure and RNA-RNA interaction prediction as well as basic and advanced sequence alignment algorithms. The services are freely available at http://rna.informatik.uni-freiburg.de.
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
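As a rough illustration of the RESTful, back-end-diagnostics design described above (this is not the framework's actual API), a minimal service might expose summary statistics for a named model variable so that a lightweight web client never has to move the full dataset; the route, the variable name "tas", and the returned fields are all assumptions.

```python
# Hypothetical minimal REST endpoint for server-side climate diagnostics (illustrative only).
from flask import Flask, jsonify
import numpy as np

app = Flask(__name__)

# Stand-in for a back-end data store; a real deployment would read model output on the server.
FAKE_MODEL_OUTPUT = {"tas": np.random.default_rng(0).normal(288.0, 5.0, size=1000)}

@app.route("/api/summary/<variable>")
def summary(variable):
    data = FAKE_MODEL_OUTPUT.get(variable)
    if data is None:
        return jsonify(error=f"unknown variable {variable}"), 404
    # Only small, derived quantities cross the network, not the raw data.
    return jsonify(variable=variable, mean=float(data.mean()),
                   minimum=float(data.min()), maximum=float(data.max()))

if __name__ == "__main__":
    app.run(port=8080)  # a browser or script can then GET /api/summary/tas
```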
Soil Science in the Digital Age
ERIC Educational Resources Information Center
High, Vance; VanHorn, Laura
2012-01-01
With the pervasiveness of digital technology, elementary students almost instinctively begin inquiry-based instruction with a bias. Visual information from digital devices competes with elementary science inquiry. To counteract this effect, teachers can use advance organizers. The advance organizer is a tool or a mental learning aid to help…
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
Advanced engineering environment collaboration project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamph, Jane Ann; Pomplun, Alan R.; Kiba, Grant W.
2008-12-01
The Advanced Engineering Environment (AEE) is a model for an engineering design and communications system that will enhance project collaboration throughout the nuclear weapons complex (NWC). Sandia National Laboratories and Parametric Technology Corporation (PTC) worked together on a prototype project to evaluate the suitability of a portion of PTC's Windchill 9.0 suite of data management, design and collaboration tools as the basis for an AEE. The AEE project team implemented Windchill 9.0 development servers in both classified and unclassified domains and used them to test and evaluate the Windchill tool suite relative to the needs of the NWC using weapons project use cases. A primary deliverable was the development of a new real-time collaborative desktop design and engineering process using PDMLink (data management tool), Pro/Engineer (mechanical computer aided design tool) and ProductView Lite (visualization tool). Additional project activities included evaluations of PTC's electrical computer aided design, visualization, and engineering calculations applications. This report documents the AEE project work to share information and lessons learned with other NWC sites. It also provides PTC with recommendations for improving their products for NWC applications.
Applications of the pipeline environment for visual informatics and genomics computations
2011-01-01
Background: Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results: This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions: The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
Using RSVP for analyzing state and previous activities for the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Cooper, Brian K.; Hartman, Frank; Maxwell, Scott; Wright, John; Yen, Jeng
2004-01-01
Current developments in immersive environments for mission planning include several tools which make up a system for performing and rehearsing missions. This system, known as the Rover Sequencing and Visualization Program (RSVP), includes tools for planning long range sorties for highly autonomous rovers, tools for planning operations with robotic arms, and advanced tools for visualizing telemetry from remote spacecraft and landers. One of the keys to successful planning of rover activities is knowing what the rover has accomplished to date and understanding the current rover state. RSVP builds on the lessons learned and the heritage of the Mars Pathfinder mission. This paper will discuss the tools and methodologies present in the RSVP suite for examining rover state, reviewing previous activities, visually comparing telemetered results to rehearsed results, and reviewing science and engineering imagery. In addition we will present how this tool suite was used on the Mars Exploration Rovers (MER) project to explore the surface of Mars.
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter surveys the state of the art in machine vision technologies in light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and the combination of multiple technologies makes it possible for the inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
Image-Enabled Discourse: Investigating the Creation of Visual Information as Communicative Practice
ERIC Educational Resources Information Center
Snyder, Jaime
2012-01-01
Anyone who has clarified a thought or prompted a response during a conversation by drawing a picture has exploited the potential of image making as an interactive tool for conveying information. Images are increasingly ubiquitous in daily communication, in large part due to advances in visually enabled information and communication technologies…
Blended Learning in the Visual Communications Classroom: Student Reflections on a Multimedia Course
ERIC Educational Resources Information Center
George-Palilonis, Jennifer; Filak, Vincent
2009-01-01
Advances in digital technology and a rapidly evolving media landscape continue to dramatically change teaching and learning. Among these changes is the emergence of multimedia teaching and learning tools, online degree programs, and hybrid classes that blend traditional and digital content delivery. At the same time, visual communication programs…
Multimodal visualization interface for data management, self-learning and data presentation.
Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M
2006-10-01
Multimodal visualization software, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the topic of visualization and modeling of various aspects of the human anatomy. Numerous tools used in radiology are integrated in the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow analysis of medical imaging data using advanced features outside radiological premises (for example, for patient review, conference presentation, or tutorial preparation). The system is free and based on an open-source software development architecture, so updates of the system for custom applications are possible.
Data Visualization in Sociology
Healy, Kieran; Moody, James
2014-01-01
Visualizing data is central to social scientific work. Despite a promising early beginning, sociology has lagged in the use of visual tools. We review the history and current state of visualization in sociology. Using examples throughout, we discuss recent developments in ways of seeing raw data and presenting the results of statistical modeling. We make a general distinction between those methods and tools designed to help explore datasets, and those designed to help present results to others. We argue that recent advances should be seen as part of a broader shift towards easier sharing of the code and data both between researchers and with wider publics, and encourage practitioners and publishers to work toward a higher and more consistent standard for the graphical display of sociological insights. PMID:25342872
Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2005-01-01
Significant advances have been made to non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper will address a capability titled LiveView3D, which is the first step in the development of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.
Applications of image processing and visualization in the evaluation of murder and assault
NASA Astrophysics Data System (ADS)
Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn
1994-09-01
Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology in collaboration with groups at the University of North Carolina at Chapel Hill are actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.
Stereoscopic display of 3D models for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2006-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
Panoramic-image-based rendering solutions for visualizing remote locations via the web
NASA Astrophysics Data System (ADS)
Obeysekare, Upul R.; Egts, David; Bethmann, John
2000-05-01
With advances in panoramic image-based rendering techniques and the rapid expansion of web advertising, new techniques are emerging for visualizing remote locations on the WWW. Success of these techniques depends on how easy and inexpensive it is to develop a new type of web content that provides pseudo-3D visualization at home, 24 hours a day. Furthermore, the acceptance of this new visualization medium depends on how effectively familiarization tools serve a segment of the population that has never been exposed to this type of visualization. This paper addresses various hardware and software solutions available to collect, produce, and view panoramic content. While the cost and effectiveness of building the content are addressed using a few commercial hardware solutions, the effectiveness of familiarization tools is evaluated using a few sample data sets.
Rasheed, Waqas; Neoh, Yee Yik; Bin Hamid, Nor Hisham; Reza, Faruque; Idris, Zamzuri; Tang, Tong Boon
2017-10-01
Functional neuroimaging modalities play an important role in deciding the diagnosis and course of treatment of neuronal dysfunction and degeneration. This article presents an analytical tool with visualization that exploits the strengths of the MEG (magnetoencephalographic) neuroimaging technique. The tool automates MEG data import (in tSSS format), channel information extraction, time/frequency decomposition, and circular graph visualization (connectogram) for simple result inspection. For advanced users, the tool also provides magnitude squared coherence (MSC) values allowing personalized threshold levels, and the computation of a default model from MEG data of a control population. The default model obtained from healthy population data serves as a useful benchmark to diagnose and monitor neuronal recovery during treatment. The proposed tool further provides optional labels with international 10-10 system nomenclature in order to facilitate comparison studies with EEG (electroencephalography) sensor space. Potential applications in epilepsy and traumatic brain injury studies are also discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
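The magnitude squared coherence computation mentioned above can be sketched as follows; this is not the published tool, and the sampling rate, frequency band, window length, and the 0.7 threshold are illustrative assumptions standing in for the personalized thresholds the tool supports.

```python
# Hedged sketch: band-averaged MSC between MEG channel pairs, thresholded into a
# binary connectivity matrix that a connectogram could display (illustrative only).
import numpy as np
from scipy.signal import coherence

def msc_connectivity(data, fs, band=(8.0, 13.0), threshold=0.7):
    """data: array of shape (n_channels, n_samples) holding sensor time series."""
    n_ch = data.shape[0]
    adj = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(data[i], data[j], fs=fs, nperseg=1024)
            in_band = (f >= band[0]) & (f <= band[1])
            msc = cxy[in_band].mean()              # band-averaged MSC
            adj[i, j] = adj[j, i] = float(msc >= threshold)
    return adj

# Synthetic data standing in for tSSS-cleaned MEG recordings (10 channels, 5 s at 1 kHz).
rng = np.random.default_rng(0)
fake_meg = rng.standard_normal((10, 5000))
print(msc_connectivity(fake_meg, fs=1000.0).sum())
```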
Photovoice as a Teaching Tool: Learning by Doing with Visual Methods
ERIC Educational Resources Information Center
Schell, Kara; Ferguson, Alana; Hamoline, Rita; Shea, Jennifer; Thomas-MacLean, Roanne
2009-01-01
There has been a lack of research done on in-class teaching and learning using visual methods. The purpose of this article is to demonstrate an enriched teaching and learning experience, facilitated by a Photovoice project, in an Advanced Methodology class where sociology graduate students were exposed to various social research methods and…
Lighting Studies for Fuelling Machine Deployed Visual Inspection Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoots, Carl; Griffith, George
2015-04-01
Under subcontract to James Fisher Nuclear, Ltd., INL has been reviewing advanced vision systems for inspection of graphite in high-radiation, high-temperature, and high-pressure environments. INL has performed calculations and proof-of-principle measurements of optics and lighting techniques to be considered for visual inspection of graphite fuel channels in AGR reactors in the UK.
Incorporating Assistive Technology for Students with Visual Impairments into the Music Classroom
ERIC Educational Resources Information Center
Rush, Toby W.
2015-01-01
Although recent advances make it easier than ever before for students with severe visual impairments to be fully accommodated in the music classroom, one of the most significant current challenges in this area is most music educators' unfamiliarity with current assistive technology. Fortunately, many of these tools are readily available and even…
iGlobe Interactive Visualization and Analysis of Spatial Data
NASA Technical Reports Server (NTRS)
Hogan, Patrick
2012-01-01
iGlobe is open-source software built on NASA World Wind virtual globe technology. iGlobe provides a growing set of tools for weather science, climate research, and agricultural analysis. Up until now, these types of sophisticated tools have been developed in isolation by national agencies, academic institutions, and research organizations. By providing an open-source solution for analyzing and visualizing weather, climate, and agricultural data, iGlobe helps the scientific and research communities more readily advance the solutions needed to better understand the dynamics of our home planet, Earth.
ERIC Educational Resources Information Center
de Castro, Christopher H.
2011-01-01
This study explored the development of student's conceptual understandings of limit and derivative when utilizing specifically designed computational tools. Fourteen students from a secondary Advanced Placement Calculus AB course learned and explored the limit and derivative concepts from differential calculus using visualization tools in the…
MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.
Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd
2018-07-01
Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.
A survey of tools and resources for the next generation analyst
NASA Astrophysics Data System (ADS)
Hall, David L.; Graham, Jake; Catherman, Emily
2015-05-01
We have previously argued that a combination of trends in information technology (IT) and changing habits of people using IT provide opportunities for the emergence of a new generation of analysts that can perform effective intelligence, surveillance and reconnaissance (ISR) on a "do it yourself" (DIY) or "armchair" approach (see D.L. Hall and J. Llinas (2014)). Key technology advances include: i) new sensing capabilities including the use of micro-scale sensors and ad hoc deployment platforms such as commercial drones, ii) advanced computing capabilities in mobile devices that allow advanced signal and image processing and modeling, iii) intelligent interconnections due to advances in "web N" capabilities, and iv) global interconnectivity and increasing bandwidth. In addition, the changing habits of the digital natives reflect new ways of collecting and reporting information, sharing information, and collaborating in dynamic teams. This paper provides a survey and assessment of tools and resources to support this emerging analysis approach. The tools range from large-scale commercial tools such as IBM i2 Analyst Notebook, Palantir, and GeoSuite to emerging open source tools such as GeoViz and DECIDE from university research centers. The tools include geospatial visualization tools, social network analysis tools and decision aids. A summary of tools is provided along with links to web sites for tool access.
McIDAS-V: A Data Analysis and Visualization Tool for Global Satellite Data
NASA Astrophysics Data System (ADS)
Achtor, T. H.; Rink, T. D.
2011-12-01
The Man-computer Interactive Data Access System (McIDAS-V) is a Java-based, open-source, freely available system for scientists, researchers and algorithm developers working with atmospheric data. The McIDAS-V software tools provide powerful new data manipulation and visualization capabilities, including 4-dimensional displays, an abstract data model with integrated metadata, user-defined computation, and a powerful scripting capability. As such, McIDAS-V is a valuable tool for scientists and researchers within the GEO and GEOSS domains. The advancing polar and geostationary orbit environmental satellite missions conducted by several countries will carry advanced instrumentation and systems that will collect and distribute land, ocean, and atmosphere data. These systems provide atmospheric and sea surface temperatures, humidity soundings, cloud and aerosol properties, and numerous other environmental products. This presentation will display and demonstrate some of the capabilities of McIDAS-V to analyze and display high temporal and spectral resolution data using examples from international environmental satellites.
Chapter 16: Lignin Visualization: Advanced Microscopy Techniques for Lignin Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Yining; Donohoe, Bryon S
Visualization of lignin in plant cell walls, with both spatial and chemical resolution, is emerging as an important tool to understand lignin's role in the plant cell wall's nanoscale architecture and to understand and design processes intended to modify the lignin. As such, this chapter reviews recent advances in advanced imaging methods with respect to lignin in plant cell walls. This review focuses on the importance of lignin detection and localization for studies in both plant biology and biotechnology. Challenges going forward to identify and delineate lignin from other plant cell wall components and to quantitatively analyze lignin in whole cell walls from native plant tissue and treated biomass are also discussed.
Computing Advances Enable More Efficient Buildings - Continuum Magazine
"…and to visualize and react to the data that's available to them," says Brackney. One example is a tool that tells occupants when to open and close their windows. A screen shot of the Building Agent after it has been launched. One of the visualization tools is a floor plan that shows which parts of the…
ERIC Educational Resources Information Center
Yammine, Kaissar; Violato, Claudio
2015-01-01
Many medical graduates are deficient in anatomy knowledge and perhaps below the standards for safe medical practice. Three-dimensional visualization technology (3DVT) has been advanced as a promising tool to enhance anatomy knowledge. The purpose of this review is to conduct a meta-analysis of the effectiveness of 3DVT in teaching and learning…
Open source bioimage informatics for cell biology.
Swedlow, Jason R; Eliceiri, Kevin W
2009-11-01
Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.
Visualizing time-related data in biology, a review
Secrier, Maria; Schneider, Reinhard
2014-01-01
Time is of the essence in biology as in so much else. For example, monitoring disease progression or the timing of developmental defects is important for the processes of drug discovery and therapy trials. Furthermore, an understanding of the basic dynamics of biological phenomena that are often strictly time regulated (e.g. circadian rhythms) is needed to make accurate inferences about the evolution of biological processes. Recent advances in technologies have enabled us to measure timing effects more accurately and in more detail. This has driven related advances in visualization and analysis tools that try to effectively exploit this data. Beyond timeline plots, notable attempts at more involved temporal interpretation have been made in recent years, but awareness of the available resources is still limited within the scientific community. Here, we review some advances in biological visualization of time-driven processes and consider how they aid data analysis and interpretation. PMID:23585583
Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J
2014-02-01
A Visual Basic simulation software (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on a dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key factor keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
Interactive visualization of vegetation dynamics
Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James
2001-01-01
Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
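A simplified sketch of deriving the seasonal metrics named above (onset, peak, end, and duration of the growing season) from a one-year NDVI series follows; the half-amplitude threshold rule and the weekly compositing are assumptions for illustration, not the method used in the study.

```python
# Hedged sketch: key growing-season events from a single pixel's NDVI time series.
import numpy as np

def season_metrics(ndvi):
    """ndvi: 1-D array of composite NDVI values covering one growing season."""
    base, peak_val = float(ndvi.min()), float(ndvi.max())
    peak = int(np.argmax(ndvi))
    threshold = base + 0.5 * (peak_val - base)      # half-amplitude rule (assumed)
    above = np.where(ndvi >= threshold)[0]
    onset, end = int(above[0]), int(above[-1])
    return {"onset": onset, "peak": peak, "end": end, "duration": end - onset}

# Synthetic weekly NDVI curve standing in for an AVHRR pixel's annual profile.
weeks = np.arange(52)
ndvi = 0.2 + 0.5 * np.exp(-((weeks - 26) / 8.0) ** 2)
print(season_metrics(ndvi))
```

Comparing these per-year metrics against their multi-year averages is the kind of current-season versus historical comparison the abstract describes.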
MemAxes Visualization Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardware advancements such as Intel's PEBS and AMD's IBS, as well as software developments such as the perf_event API in Linux, have made it possible to acquire memory access samples annotated with performance information. MemAxes is a visualization and analysis tool for memory access sample data. By mapping the samples to their associated code, variables, node topology, and application dataset, MemAxes provides intuitive views of the data.
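As a hypothetical illustration of the code/data mapping described above (not MemAxes code; the sample fields and values below are invented for the example), the sketch groups memory access samples by source line and by the variable resolved from the sampled data address:

```python
from collections import Counter, defaultdict

# Invented memory-access samples, roughly the shape produced by PEBS/IBS
# sampling through perf_event: each sample carries the source location of
# the sampled instruction, the variable resolved from the data address,
# the NUMA node, and the access latency in cycles.
samples = [
    {"source": "stencil.c:42", "variable": "grid", "node": 0, "latency": 310},
    {"source": "stencil.c:42", "variable": "grid", "node": 1, "latency": 95},
    {"source": "solver.c:17",  "variable": "rhs",  "node": 0, "latency": 40},
]

# Aggregate per source line and per variable -- the kind of mapping a tool
# like MemAxes presents visually alongside node topology views.
samples_per_line = Counter(s["source"] for s in samples)
latency_per_variable = defaultdict(list)
for s in samples:
    latency_per_variable[s["variable"]].append(s["latency"])

print(samples_per_line)
for var, lats in latency_per_variable.items():
    print(f"{var}: {len(lats)} samples, mean latency {sum(lats) / len(lats):.0f} cycles")
```

A real pipeline would ingest many more attributes per sample (cache level, timestamps, core and socket IDs), but the aggregation idea is the same.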
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast-growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews of advanced topics as well as some new developments in specific subjects.
Open source bioimage informatics for cell biology
Swedlow, Jason R.; Eliceiri, Kevin W.
2009-01-01
Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what makes an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery. PMID:19833518
A Data-Driven Approach to Interactive Visualization of Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jun
Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools to assist grid operators in performing mission-critical tasks and to enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving business practices in today's electric power industry. The investigation conducted, however, has revealed that existing commercial power grid visualization tools rely heavily on human designers, hindering users' ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain the pre-designed visual displays. This project proposes a data-driven approach to overcome these common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on the fly. It does not require a visualization design stage, completely eliminating or significantly reducing the cost of building and maintaining visual displays. The research and development (R&D) conducted in this project is mainly divided into two phases. The first phase (Phase I & II) focuses on developing data-driven techniques for visualization of the power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams, fuzzy-model-based rich data visualization for situational awareness, etc. The R&D conducted during the second phase (Phase IIB) focuses on enhancing the prototyped data-driven visualization tool based on the gathered requirements and use cases. The goal is to evolve the prototyped tool developed during the first phase into a commercial-grade product. We will use one of the identified application areas as an example to demonstrate how research results achieved in this project are successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven to be promising for building the next-generation power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently being leveraged by more than 60 utility organizations in North America and Europe.
NASA Astrophysics Data System (ADS)
Chaudhary, A.
2017-12-01
Current simulation models and sensors are producing high-resolution, high-velocity data in the geosciences domain. Knowledge discovery from these complex and large datasets requires tools that are capable of handling very large data and providing interactive data analytics features to researchers. To this end, Kitware and its collaborators are producing the open-source tools GeoNotebook, GeoJS, Gaia, and Minerva for the geosciences, which use hardware-accelerated graphics and advancements in parallel and distributed processing (Celery and Apache Spark) and can be loosely coupled to solve real-world use cases. GeoNotebook (https://github.com/OpenGeoscience/geonotebook), co-developed by Kitware and NASA Ames, is an extension to the Jupyter Notebook. It provides interactive visualization and Python-based analysis of geospatial data and, depending on the backend (KTile or GeoPySpark), can handle data sizes from hundreds of gigabytes to terabytes. GeoNotebook uses GeoJS (https://github.com/OpenGeoscience/geojs) to render very large geospatial data on the map using the WebGL and Canvas2D APIs. GeoJS is more than just a GIS library, as users can create scientific plots such as vector and contour plots and can embed InfoVis plots using D3.js. GeoJS aims for high-performance visualization and interactive data exploration of scientific and geospatial location-aware datasets and supports features such as Point, Line, and Polygon, as well as advanced features such as Pixelmap, Contour, Heatmap, and Choropleth. Another of our open-source tools, Minerva (https://github.com/kitware/minerva), is a geospatial application built on top of the open-source web-based data management system Girder (https://github.com/girder/girder), which provides the ability to access data from HDFS or Amazon S3 buckets and provides capabilities to perform visualization and analyses on geosciences data in a web environment using GDAL and GeoPandas wrapped in a unified API provided by Gaia (https://github.com/OpenDataAnalytics/gaia). In this presentation, we will discuss core features of each of these tools and present lessons learned on handling large data in the context of data management, analysis, and visualization.
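As a rough, generic sketch of the out-of-core idea behind such backends (this uses rasterio and matplotlib, not the GeoNotebook or GeoJS APIs, and the file path is a placeholder):

```python
import rasterio
import matplotlib.pyplot as plt

# Hypothetical multi-gigabyte GeoTIFF; the path is illustrative only.
path = "elevation_mosaic.tif"

with rasterio.open(path) as src:
    # Read a decimated overview rather than the full-resolution array,
    # the same idea a tiling backend uses to keep interaction responsive.
    overview = src.read(1, out_shape=(src.height // 16, src.width // 16))
    bounds = src.bounds

plt.imshow(overview, cmap="terrain",
           extent=(bounds.left, bounds.right, bounds.bottom, bounds.top))
plt.title("Decimated overview of a large geospatial raster")
plt.colorbar(label="elevation (m)")
plt.show()
```

The actual backends (KTile, GeoPySpark) work with tiles and distributed processing rather than a single decimated overview, but the principle of never shipping the full-resolution raster to the client is the same.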
NASA Technical Reports Server (NTRS)
Garbeff, Theodore J., II; Baerny, Jennifer K.
2017-01-01
The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled the visualization of wind tunnel model surface flow features as they develop in real time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test-section total temperature. Successful measurements have been performed on wing-body sting-mounted test articles, semi-span floor-mounted aircraft models, and sting-mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment, robust data acquisition, storage, and workflow, real-time image processing algorithms, and the evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has proven to be a valuable analysis tool in the study of new and old aircraft and launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.
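The in-line processing steps themselves are not spelled out above; the following is a hedged, generic sketch of one common enhancement step, subtracting a reference frame so that small surface-temperature changes (for example across boundary-layer transition) stand out, assuming the frames arrive as a NumPy array of raw camera counts:

```python
import numpy as np

def enhance_ir_frames(frames, reference_count=30):
    """Subtract a reference image (mean of the first few frames) from an
    IR image sequence and rescale, so that small surface-temperature
    differences become visible.  `frames` is shaped (n_frames, height,
    width) and holds raw camera counts."""
    frames = np.asarray(frames, dtype=np.float64)
    reference = frames[:reference_count].mean(axis=0)
    delta = frames - reference          # change relative to the reference image
    scale = np.abs(delta).max()
    scale = scale if scale > 0 else 1.0
    return delta / scale                # normalized to [-1, 1]

# Example with synthetic data standing in for camera frames.
rng = np.random.default_rng(0)
synthetic = rng.normal(1000.0, 2.0, size=(60, 64, 64))
enhanced = enhance_ir_frames(synthetic)
print(enhanced.shape, enhanced.min(), enhanced.max())
```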
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ronald W.; Collins, Benjamin S.; Godfrey, Andrew T.
2016-12-09
In order to support engineering analysis of Virtual Environment for Reactor Analysis (VERA) model results, the Consortium for Advanced Simulation of Light Water Reactors (CASL) needs a tool that provides visualizations of HDF5 files that adhere to the VERAOUT specification. VERAView provides an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The Python-based software provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations.
Visualization of LC-MS/MS proteomics data in MaxQuant.
Tyanova, Stefka; Temu, Tikira; Carlson, Arthur; Sinitcyn, Pavel; Mann, Matthias; Cox, Juergen
2015-04-01
Modern software platforms enable the analysis of shotgun proteomics data in an automated fashion resulting in high quality identification and quantification results. Additional understanding of the underlying data can be gained with the help of advanced visualization tools that allow for easy navigation through large LC-MS/MS datasets potentially consisting of terabytes of raw data. The updated MaxQuant version has a map navigation component that steers the users through mass and retention time-dependent mass spectrometric signals. It can be used to monitor a peptide feature used in label-free quantification over many LC-MS runs and visualize it with advanced 3D graphic models. An expert annotation system aids the interpretation of the MS/MS spectra used for the identification of these peptide features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
REX Control System is a professional advanced tool for the design and implementation of complex control systems that belongs to the softPLC category. It covers the entire process, starting from simulation of the functionality of the application before deployment, through implementation on a real-time target, to analysis, diagnostics and visualization. Basically it consists of two parts: the development tools and the runtime system. It is also compatible with the Simulink environment, and control algorithms are implemented in a very similar way. The control scheme is finally compiled (using the RexDraw utility) and uploaded to a chosen real-time target (using the RexView utility). There is a wide variety of hardware platforms and real-time operating systems supported by REX Control System, such as Windows Embedded, Linux, and Linux/Xenomai deployed on SBCs, IPCs, PACs, Raspberry Pi and others, with many I/O interfaces. It is a modern system designed for both measurement and control applications, offering many additional functions for data archiving, visualization based on HTML5, and communication standards. The paper sums up the possibilities of its use in the educational process, focusing on the control of case-study physical models with classical and advanced control algorithms.
Advanced imaging in acute and chronic deep vein thrombosis
Karande, Gita Yashwantrao; Sanchez, Yadiel; Baliyan, Vinit; Mishra, Vishala; Ganguli, Suvranu; Prabhakar, Anand M.
2016-01-01
Deep venous thrombosis (DVT) affecting the extremities is a common clinical problem. Prompt imaging aids in rapid diagnosis and adequate treatment. While ultrasound (US) remains the workhorse of detection of extremity venous thrombosis, CT and MRI are commonly used as the problem-solving tools either to visualize the thrombosis in central veins like superior or inferior vena cava (IVC) or to test for the presence of complications like pulmonary embolism (PE). The cross-sectional modalities also offer improved visualization of venous collaterals. The purpose of this article is to review the established modalities used for characterization and diagnosis of DVT, and further explore promising innovations and recent advances in this field. PMID:28123971
Distributed augmented reality with 3-D lung dynamics--a planning tool concept.
Hamza-Lup, Felix G; Santhanam, Anand P; Imielińska, Celina; Meeks, Sanford L; Rolland, Jannick P
2007-01-01
Augmented reality (AR) systems add visual information to the world by using advanced display techniques. The advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide set of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to lung (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.
MATISSE: a web-based tool to access, visualize and analyze high-resolution minor body observations
NASA Astrophysics Data System (ADS)
Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo
2016-07-01
In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of comet 67P Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community with a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a high-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed with the free software Paraview. During this period the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting from the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.
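As a simple illustration of the high-order products listed above (difference, ratio, average, RGB), computed here on synthetic, already co-registered arrays rather than on real VIRTIS-M projections, and independent of MATISSE's own implementation:

```python
import numpy as np

# Two co-registered observations of the same surface region, e.g. band
# radiances already projected onto the shape model.  Synthetic here.
rng = np.random.default_rng(1)
obs_a = rng.uniform(0.1, 1.0, size=(128, 128))
obs_b = rng.uniform(0.1, 1.0, size=(128, 128))

difference = obs_a - obs_b          # highlights temporal or band-to-band changes
ratio = obs_a / obs_b               # suppresses common illumination effects
average = 0.5 * (obs_a + obs_b)     # simple mosaic/average product

def normalise(x):
    """Rescale an array to [0, 1] for display."""
    return (x - x.min()) / (x.max() - x.min())

# A false-colour RGB composite assembled from three products.
rgb = np.dstack([normalise(obs_a), normalise(obs_b), normalise(ratio)])
print(rgb.shape)  # (128, 128, 3), ready for display or export
```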
ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.
Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y
2008-08-12
New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform, driven by database management systems and able to perform bi-directional data processing-to-visualization with declarative querying capabilities, is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network data represented in the standard Graph Modeling Language (GML) format, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens decouples complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to the associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: a human disease association network, a drug-target interaction network and a protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
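A minimal sketch of the two-phase idea, using a hypothetical annotation table with sqlite3 and networkx in place of ProteoLens's Oracle/PostgreSQL connectivity and Java implementation:

```python
import sqlite3
import networkx as nx

# Phase 1: a data association rule -- here simply "node id -> expression
# level", expressed as a SQL query against a hypothetical annotation table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE annotation (node_id TEXT, expression REAL)")
conn.executemany("INSERT INTO annotation VALUES (?, ?)",
                 [("TP53", 2.4), ("MDM2", 0.7), ("EGFR", 1.9)])
rule = dict(conn.execute("SELECT node_id, expression FROM annotation"))

# Phase 2: apply the rule to a network, annotating nodes with the values.
g = nx.Graph([("TP53", "MDM2"), ("TP53", "EGFR")])
nx.set_node_attributes(g, rule, name="expression")

for node, data in g.nodes(data=True):
    print(node, data.get("expression"))
```

In ProteoLens the association rules are declared against real database tables or views and the annotated network is rendered for visual exploration rather than printed.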
Advances in visual representation of molecular potentials.
Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen
2010-06-01
The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling studies and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, covering their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.
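For reference, the classical point-charge treatment that the EM-ESP method discards computes the electrostatic potential as a Coulomb sum over atomic partial charges:

```latex
V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{i} \frac{q_i}{\lvert \mathbf{r} - \mathbf{r}_i \rvert}
```

where q_i and r_i are the partial charge and position of atom i; EM-ESP instead produces potentials calibrated against higher-level quantum chemical results, as described above.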
The BioCyc collection of microbial genomes and metabolic pathways.
Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi
2017-08-17
BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Tools for Visualizing HIV in Cure Research.
Niessl, Julia; Baxter, Amy E; Kaufmann, Daniel E
2018-02-01
The long-lived HIV reservoir remains a major obstacle for an HIV cure. Current techniques to analyze this reservoir are generally population-based. We highlight recent developments in methods visualizing HIV, which offer a different, complementary view, and provide indispensable information for cure strategy development. Recent advances in fluorescence in situ hybridization techniques enabled key developments in reservoir visualization. Flow cytometric detection of HIV mRNAs, concurrently with proteins, provides a high-throughput approach to study the reservoir on a single-cell level. On a tissue level, key spatial information can be obtained detecting viral RNA and DNA in situ by fluorescence microscopy. At total-body level, advancements in non-invasive immuno-positron emission tomography (PET) detection of HIV proteins may allow an encompassing view of HIV reservoir sites. HIV imaging approaches provide important, complementary information regarding the size, phenotype, and localization of the HIV reservoir. Visualizing the reservoir may contribute to the design, assessment, and monitoring of HIV cure strategies in vitro and in vivo.
Data-Driven Healthcare: Challenges and Opportunities for Interactive Visualization.
Gotz, David; Borland, David
2016-01-01
The healthcare industry's widespread digitization efforts are reshaping one of the largest sectors of the world's economy. This transformation is enabling systems that promise to use ever-improving data-driven evidence to help doctors make more precise diagnoses, institutions identify at-risk patients for intervention, clinicians develop more personalized treatment plans, and researchers better understand medical outcomes within complex patient populations. Given the scale and complexity of the data required to achieve these goals, advanced data visualization tools have the potential to play a critical role. This article reviews a number of visualization challenges unique to the healthcare discipline.
Using Firefly Tools to Enhance Archive Web Pages
NASA Astrophysics Data System (ADS)
Roby, W.; Wu, X.; Ly, L.; Goldina, T.
2013-10-01
Astronomy web developers are looking for fast and powerful HTML 5/AJAX tools to enhance their web archives. We are exploring ways to make this easier for the developer. How could you have a full FITS visualizer or a Web 2.0 table that supports paging, sorting, and filtering in your web page in 10 minutes? Can it be done without even installing any software or maintaining a server? Firefly is a powerful, configurable system for building web-based user interfaces to access astronomy science archives. It has been in production for the past three years. Recently, we have made some of the advanced components available through very simple JavaScript calls. This allows a web developer, without any significant knowledge of Firefly, to have FITS visualizers, advanced table display, and spectrum plots on their web pages with minimal learning curve. Because we use cross-site JSONP, installing a server is not necessary. Web sites that use these tools can be created in minutes. Firefly was created in IRSA, the NASA/IPAC Infrared Science Archive (http://irsa.ipac.caltech.edu). We are using Firefly to serve many projects including Spitzer, Planck, WISE, PTF, LSST and others.
Engineering visualization utilizing advanced animation
NASA Technical Reports Server (NTRS)
Sabionski, Gunter R.; Robinson, Thomas L., Jr.
1989-01-01
Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.
Experimenter's laboratory for visualized interactive science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.
1992-01-01
The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translator. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models are a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models from the BioModels Database as well. PMID:24006914
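For readers new to the format, the snippet below reads and inspects an SBML model with the python-libsbml bindings; the file name is a placeholder and this is independent of the PathCase-SB web interface itself:

```python
import libsbml  # pip install python-libsbml

# Placeholder file name, e.g. a model downloaded from the BioModels Database.
doc = libsbml.readSBML("BIOMD0000000012.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()

model = doc.getModel()
if model is None:
    raise SystemExit("no model could be parsed from the file")

print("model id:", model.getId())
print("species:", [model.getSpecies(i).getId() for i in range(model.getNumSpecies())])
print("reactions:", model.getNumReactions())
```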
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
ARC integration into the NEAMS Workbench
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauff, N.; Gaughan, N.; Kim, T.
2017-01-01
One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.
Spectacle and SpecViz: New Spectral Analysis and Visualization Tools
NASA Astrophysics Data System (ADS)
Earl, Nicholas; Peeples, Molly; JDADF Developers
2018-01-01
A new era of spectroscopic exploration of our universe is being ushered in with advances in instrumentation and next-generation space telescopes. The advent of new spectroscopic instruments has highlighted a pressing need for tools scientists can use to analyze and explore these new data. We have developed Spectacle, a software package for analyzing both synthetic spectra from hydrodynamic simulations and real COS data, with an aim of characterizing the behavior of the circumgalactic medium. It allows easy reduction of spectral data and provides analytic line generation capabilities. Currently, the package is focused on automatic determination of absorption regions and line identification with custom line list support, simultaneous line fitting using Voigt profiles via least-squares or MCMC methods, and multi-component modeling of blended features. Non-parametric measurements, such as equivalent widths, delta v90, and full-width half-max, are available. Spectacle also provides the ability to compose compound models used to generate synthetic spectra, allowing the user to define various LSF kernels and uncertainties and to specify sampling. We also present updates to the visualization tool SpecViz, developed in conjunction with the JWST data analysis tools development team, to aid in the exploration of spectral data. SpecViz is an open-source, Python-based spectral 1-D interactive visualization and analysis application built around high-performance interactive plotting. It supports handling general and instrument-specific data and includes advanced tool-sets for filtering and detrending one-dimensional data, along with the ability to isolate absorption regions using slicing and to manipulate spectral features via spectral arithmetic. Multi-component modeling is also possible using a flexible model fitting tool-set that supports custom models to be used with various fitting routines. It also features robust user extensions such as custom data loaders and support for user-created plugins that add new functionality. This work was supported in part by HST AR #13919, HST GO #14268, and HST AR #14560.
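Spectacle's own API is not shown above; as a hedged stand-in, the sketch below performs the same kind of operation with astropy, least-squares fitting of a Voigt absorption profile and an equivalent-width measurement on a synthetic normalized spectrum:

```python
import numpy as np
from astropy.modeling import models, fitting

# Synthetic normalized spectrum with one absorption feature near 1215.8 Angstrom.
wave = np.linspace(1214.0, 1218.0, 400)
truth = models.Const1D(1.0) - models.Voigt1D(x_0=1215.8, amplitude_L=0.6,
                                             fwhm_L=0.15, fwhm_G=0.25)
rng = np.random.default_rng(2)
flux = truth(wave) + rng.normal(0.0, 0.01, wave.size)

# Fit continuum minus a Voigt absorption profile by least squares.
init = models.Const1D(1.0) - models.Voigt1D(x_0=1215.7, amplitude_L=0.5,
                                            fwhm_L=0.1, fwhm_G=0.2)
fitter = fitting.LevMarLSQFitter()
fit = fitter(init, wave, flux)

# Equivalent width of the fitted feature (continuum taken to be 1).
dw = wave[1] - wave[0]
ew = np.sum(1.0 - fit(wave)) * dw
print(f"fitted centre: {fit.x_0_1.value:.2f}, equivalent width: {ew:.3f}")
```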
Epiviz: a view inside the design of an integrated visual analysis software for genomics
2015-01-01
Background Computational and visual data analysis for genomics has traditionally involved a combination of tools and resources, of which the most ubiquitous consist of genome browsers, focused mainly on integrative visualization of large numbers of big datasets, and computational environments, focused on data modeling of a small number of moderately sized datasets. Workflows that involve the integration and exploration of multiple heterogeneous data sources, small and large, public and user-specific, have been poorly addressed by these tools. In our previous work, we introduced Epiviz, which bridges the gap between the two types of tools, simplifying these workflows. Results In this paper we expand on the design decisions behind Epiviz, and introduce a series of new advanced features that further support the type of interactive exploratory workflow we have targeted. We discuss three ways in which Epiviz advances the field of genomic data analysis: 1) it brings code to interactive visualizations at various different levels; 2) it takes the first steps in the direction of collaborative data analysis by incorporating user plugins from source control providers, as well as by allowing analysis states to be shared among the scientific community; 3) it combines established analysis features that have never before been available simultaneously in a genome browser. In our discussion section, we present security implications of the current design, as well as a series of limitations and future research steps. Conclusions Since many of the design choices of Epiviz are novel in genomics data analysis, this paper serves both as a document of our own approaches with lessons learned, and as a starting point for future efforts in the same direction for the genomics community. PMID:26328750
Frøisland, Dag Helge; Årsand, Eirik
2015-05-01
Modern diabetes treatment is to a large extent focused on self-management in order to achieve and maintain a healthy, low HbA1c. Despite all the new technical diabetes tools and support, including advanced blood glucose meters and insulin delivery systems, diabetes patients still struggle to achieve international treatment goals, that is, HbA1c < 7.5 in children and adolescents. In this study we developed and tested a mobile-phone-based tool to capture and visualize adolescents' food intake. Our aim was to support understanding of carbohydrate counting and to facilitate doctor-adolescent communication with regard to daily treatment. Furthermore, we wanted to evaluate the effect of the designed tool with regard to empowerment, self-efficacy, and self-treatment. The study concludes that implementing a visualization tool is an important contribution to helping young people understand the basics of diabetes and to empowering them to define their treatment challenges. By capturing pictures of their own food, young people's feeling of being in charge can be strengthened and better self-treatment achieved. © 2015 Diabetes Technology Society.
Automation of Coordinated Planning Between Observatories: The Visual Observation Layout Tool (VOLT)
NASA Technical Reports Server (NTRS)
Maks, Lori; Koratkar, Anuradha; Kerbel, Uri; Pell, Vince
2002-01-01
Fulfilling the promise of the era of great observatories, NASA now has more than three space-based astronomical telescopes operating in different wavebands. This situation provides astronomers with the unique opportunity of simultaneously observing a target in multiple wavebands with these observatories. Currently scheduling multiple observatories simultaneously, for coordinated observations, is highly inefficient. Coordinated observations require painstaking manual collaboration among the observatory staff at each observatory. Because they are time-consuming and expensive to schedule, observatories often limit the number of coordinated observations that can be conducted. In order to exploit new paradigms for observatory operation, the Advanced Architectures and Automation Branch of NASA's Goddard Space Flight Center has developed a tool called the Visual Observation Layout Tool (VOLT). The main objective of VOLT is to provide a visual tool to automate the planning of coordinated observations by multiple astronomical observatories. Four of NASA's space-based astronomical observatories - the Hubble Space Telescope (HST), Far Ultraviolet Spectroscopic Explorer (FUSE), Rossi X-ray Timing Explorer (RXTE) and Chandra - are enthusiastically pursuing the use of VOLT. This paper will focus on the purpose for developing VOLT, as well as the lessons learned during the infusion of VOLT into the planning and scheduling operations of these observatories.
chimeraviz: a tool for visualizing chimeric RNA.
Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I
2017-09-15
Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). Contact: rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content
Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan
2012-01-01
Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
MATISSE: A novel tool to access, visualize and analyse data from planetary exploration missions
NASA Astrophysics Data System (ADS)
Zinzi, A.; Capria, M. T.; Palomba, E.; Giommi, P.; Antonelli, L. A.
2016-04-01
The increasing number and complexity of planetary exploration space missions require new tools to access, visualize and analyse data in order to improve their scientific return. The ASI Science Data Center (ASDC) addresses this need with the web tool MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), which allows the visualization of single observations or high-order products computed in real time, directly projected on the three-dimensional model of the selected target body. Using MATISSE it is no longer necessary to download huge quantities of data or to write specific code for every instrument analysed, which greatly encourages studies based on the joint analysis of different datasets. In addition, the extremely high-resolution output, to be used offline with free Python-based software, together with files readable by specific GIS software, makes it a valuable tool for further processing the data at the best spatial accuracy available. MATISSE's modular structure permits the addition of new missions or tasks and, thanks to dedicated future developments, it will be possible to make it compliant with the Planetary Virtual Observatory standards currently under definition. In this context, an interface to the NASA ODE REST API, through which public repositories can be accessed, has recently been developed.
Lidke, Diane S; Lidke, Keith A
2012-06-01
A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques.
Desktop Publishing: New Right Brain Documents.
ERIC Educational Resources Information Center
Williams, James B.; Murr, Lawrence E.
1987-01-01
Supporting evidence from both neurological research in brain hemisphere functions and comparisons of the use of symbols in Eastern and Western cultures are used to advance the position that the capability of graphics software for microcomputers to combine textual and visual elements makes them a powerful and revolutionary communications tool. (CLB)
Computer-Aided Tomography (CT) has been demonstrated to be a cost efficient tool for the qualitative and quantitative study of estuarine benthic communities along pollution gradients.
Now we have advanced this technology to successfully visualize and discriminate three dimen...
Y0: An innovative tool for spatial data analysis
NASA Astrophysics Data System (ADS)
Wilson, Jeremy C.
1993-08-01
This paper describes an advanced analysis and visualization tool, called Y0 (pronounced "Why not?!"), that has been developed to directly support the scientific process for earth and space science research. Y0 aids the scientific research process by enabling the user to formulate algorithms and models within an integrated environment, and then interactively explore the solution space with the aid of appropriate visualizations. Y0 has been designed to provide strong support for both quantitative analysis and rich visualization. The user's algorithm or model is defined in terms of algebraic formulas in cells on worksheets, in a similar fashion to spreadsheet programs. Y0 is specifically designed to provide the data types and rich function set necessary for effective analysis and manipulation of remote sensing data. This includes various types of arrays, geometric objects, and objects for representing geographic coordinate system mappings. Visualization of results is tailored to the needs of remote sensing, with straightforward methods of composing, comparing, and animating imagery and graphical information, with reference to geographical coordinate systems. Y0 is based on advanced object-oriented technology. It is implemented in C++ for use in Unix environments, with a user interface based on the X Window System. Y0 has been delivered under contract to Unidata, a group which provides data and software support to atmospheric researchers in universities affiliated with UCAR. This paper will explore the key concepts in Y0, describe its utility for remote sensing analysis and visualization, and give a specific example of its application to the problem of measuring glacier flow rates from Landsat imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.
2009-04-14
This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.
Visualizing the ground motions of the 1906 San Francisco earthquake
Chourasia, A.; Cutchin, S.; Aagaard, Brad T.
2008-01-01
With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools for visualization of the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.
High-power graphic computers for visual simulation: a real-time--rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
Enabling Rapid Naval Architecture Design Space Exploration
NASA Technical Reports Server (NTRS)
Mueller, Michael A.; Dufresne, Stephane; Balestrini-Robinson, Santiago; Mavris, Dimitri
2011-01-01
Well-accepted conceptual ship design tools can be used to explore a design space, but more precise results can be found using detailed models in full-featured computer-aided design programs. However, defining a detailed model can be a time-intensive task, and hence there is an incentive for time-sensitive projects to use conceptual design tools to explore the design space. In this project, the combination of advanced aerospace systems design methods and an accepted conceptual design tool facilitates the creation of a tool that enables the user not only to visualize ship geometry but also to determine design feasibility and estimate the performance of a design.
Geoscience data visualization and analysis using GeoMapApp
NASA Astrophysics Data System (ADS)
Ferrini, Vicki; Carbotte, Suzanne; Ryan, William; Chan, Samantha
2013-04-01
Increased availability of geoscience data resources has resulted in new opportunities for developing visualization and analysis tools that not only promote data integration and synthesis, but also facilitate quantitative cross-disciplinary access to data. Interdisciplinary investigations, in particular, frequently require visualizations and quantitative access to specialized data resources across disciplines, which has historically required specialist knowledge of data formats and software tools. GeoMapApp (www.geomapapp.org) is a free online data visualization and analysis tool that provides direct quantitative access to a wide variety of geoscience data for a broad international interdisciplinary user community. While GeoMapApp provides access to online data resources, it can also be packaged to work offline through the deployment of a small portable hard drive. This mode of operation can be particularly useful during field programs to provide functionality and direct access to data when a network connection is not possible. Hundreds of data sets from a variety of repositories are directly accessible in GeoMapApp, without the need for the user to understand the specifics of file formats or data reduction procedures. Available data include global and regional gridded data, images, as well as tabular and vector datasets. In addition to basic visualization and data discovery functionality, users are provided with simple tools for creating customized maps and visualizations and to quantitatively interrogate data. Specialized data portals with advanced functionality are also provided for power users to further analyze data resources and access underlying component datasets. Users may import and analyze their own geospatial datasets by loading local versions of geospatial data and can access content made available through Web Feature Services (WFS) and Web Map Services (WMS). Once data are loaded in GeoMapApp, a variety of options is provided to export data and/or 2D/3D visualizations into common formats including grids, images, text files, spreadsheets, etc. Examples of interdisciplinary investigations that make use of GeoMapApp visualization and analysis functionality will be provided.
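Content served through WMS can also be fetched programmatically; the following is a generic sketch with OWSLib (the endpoint, layer name and bounding box are placeholders, and this is not GeoMapApp's internal code):

```python
from owslib.wms import WebMapService

# Placeholder endpoint and layer name -- substitute a real WMS server.
wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))          # layers advertised by the server

img = wms.getmap(layers=["bathymetry"],
                 srs="EPSG:4326",
                 bbox=(-75.0, 35.0, -60.0, 45.0),   # lon/lat bounding box
                 size=(800, 600),
                 format="image/png",
                 transparent=True)

with open("bathymetry.png", "wb") as f:
    f.write(img.read())
```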
Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner
2016-09-14
Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allows for direct visual inspection of charge transfer processes.
Distributed telemedicine for the National Information Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.
1997-08-01
TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.
SpacePy - a Python-based library of tools for the space sciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven K; Welling, Daniel T; Koller, Josef
Space science deals with the bodies within the solar system and the interplanetary medium; the primary focus is on atmospheres and above - at Earth, the short-timescale variation in the geomagnetic field, the Van Allen radiation belts, and the deposition of energy into the upper atmosphere are key areas of investigation. SpacePy is a package for Python, targeted at the space sciences, that aims to make basic data analysis, modeling and visualization easier. It builds on the capabilities of the well-known NumPy and MatPlotLib packages. Publication quality output direct from analyses is emphasized. The SpacePy project seeks to promote accurate and open research standards by providing an open environment for code development. In the space physics community there has long been a significant reliance on proprietary languages that restrict free transfer of data and reproducibility of results. By providing a comprehensive, open-source library of widely used analysis and visualization tools in a free, modern and intuitive language, we hope that this reliance will be diminished. SpacePy includes implementations of widely used empirical models, statistical techniques used frequently in space science (e.g. superposed epoch analysis), and interfaces to advanced tools such as electron drift shell calculations for radiation belt studies. SpacePy also provides analysis and visualization tools for components of the Space Weather Modeling Framework - currently this only includes the BATS-R-US 3-D magnetohydrodynamic model and the RAM ring current model - including streamline tracing in vector fields. Further development is currently underway. External libraries, which include well-known magnetic field models, high-precision time conversions and coordinate transformations, are wrapped for access from Python using SWIG and f2py. The rest of the tools have been implemented directly in Python. The provision of open-source tools to perform common tasks will provide openness in the analysis methods employed in scientific studies and will give access to advanced tools to all space scientists regardless of affiliation or circumstance.
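Superposed epoch analysis, one of the statistical techniques the abstract mentions, stacks windows of a time series around a list of event times and averages them. The sketch below is plain NumPy, not SpacePy's own API; the synthetic data and window length are illustrative:

```python
import numpy as np

def superposed_epoch(series, event_indices, half_window):
    """Stack fixed-length windows centred on each event and average them.

    series        : 1-D array of regularly sampled data (e.g. a geomagnetic index)
    event_indices : indices of the epoch-zero samples (e.g. storm onsets)
    half_window   : number of samples to keep on each side of epoch zero
    """
    windows = []
    for i in event_indices:
        if i - half_window < 0 or i + half_window + 1 > len(series):
            continue  # skip events too close to the ends of the record
        windows.append(series[i - half_window : i + half_window + 1])
    stack = np.vstack(windows)
    lag = np.arange(-half_window, half_window + 1)
    return lag, stack.mean(axis=0), stack.std(axis=0)

# Synthetic example: a noisy signal with a dip-and-recovery after each "event".
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 5000)
events = rng.integers(100, 4900, size=40)
for e in events:
    data[e : e + 50] -= np.linspace(3.0, 0.0, 50)

lag, mean_profile, spread = superposed_epoch(data, events, half_window=100)
print(lag.shape, mean_profile.shape, spread.shape)
```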
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.
2017-12-01
As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study the computing team developed a custom, machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of 5 custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.
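The EDX spatial smart search relies on machine learning; as a much simpler illustration of one ingredient of such a pipeline, the sketch below filters candidate web links by geospatial file extensions. The extension list and URLs are hypothetical:

```python
from urllib.parse import urlparse

# File extensions commonly associated with geospatial data (illustrative list only).
GEO_EXTENSIONS = {".shp", ".geojson", ".kml", ".kmz", ".tif", ".gdb", ".zip"}

def candidate_geodata_links(links):
    """Return links whose path ends in a geospatial-looking extension."""
    hits = []
    for url in links:
        path = urlparse(url).path.lower()
        if any(path.endswith(ext) for ext in GEO_EXTENSIONS):
            hits.append(url)
    return hits

links = [
    "https://example.org/data/pipelines.geojson",
    "https://example.org/docs/report.pdf",
    "ftp://example.org/archive/wells_2016.zip",
]
print(candidate_geodata_links(links))
```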
NASA Technical Reports Server (NTRS)
Casas, Joseph
2017-01-01
Within the IARPC Collaboration Team activities of 2016, Arctic in-situ and remote earth observations advanced topics such as: 1) exploring the role for new and innovative autonomous observing technologies in the Arctic; 2) advancing catalytic national and international community-based observing efforts in support of the National Strategy for the Arctic Region; and 3) enhancing the use of discovery tools for observing system collaboration, such as the U.S. National Oceanic and Atmospheric Administration (NOAA) Arctic Environmental Response Management Application (ERMA) and the U.S. National Aeronautics and Space Administration (NASA) Arctic Collaborative Environment (ACE) project's geo-referenced, Internet-based visualization, decision-support, and exploitation tools. Critical to the success of these earth observations, for both in-situ and remote systems, is the emergence of new and innovative data collection technologies and comprehensive modeling, as well as enhanced communications and cyber infrastructure capabilities that effectively assimilate and disseminate many environmental intelligence products in a timely manner. The Arctic Collaborative Environment (ACE) project is well positioned to greatly enhance user capabilities for accessing, organizing, visualizing, sharing and producing collaborative knowledge for the Arctic.
iScreen: Image-Based High-Content RNAi Screening Analysis Tools.
Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua
2015-09-01
High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell image(s), instead of a single readout, were generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.
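iScreen itself is an R package whose API is not reproduced here; the Python sketch below shows a generic preprocessing step common in high-content RNAi screening, a robust (median/MAD) per-plate Z-score used to flag candidate hit wells. The well intensities are made up:

```python
import numpy as np

def robust_zscore(values):
    """Median/MAD-based Z-score, less sensitive to outlier wells than mean/std."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return (values - med) / (1.4826 * mad)  # 1.4826 makes MAD consistent with std

# Hypothetical per-well readouts for two plates of a screen.
plate_a = np.array([102.0, 98.5, 110.2, 95.0, 310.0, 101.3])  # one strong hit
plate_b = np.array([54.0, 57.2, 55.9, 120.5, 53.1, 56.4])

for name, plate in [("plate_a", plate_a), ("plate_b", plate_b)]:
    z = robust_zscore(plate)
    hits = np.where(np.abs(z) > 3)[0]
    print(name, "candidate hit wells:", hits)
```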
The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.
Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo
2014-12-15
Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010, and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale; Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools, in parallel with clinical information, will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.
ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments
NASA Astrophysics Data System (ADS)
Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.
2017-12-01
The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and OpenStack Hybrid Cloud Environment, focusing on key tools and techniques applied to: Earth science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.
STRING 3: An Advanced Groundwater Flow Visualization Tool
NASA Astrophysics Data System (ADS)
Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph
2016-04-01
The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges, in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING, moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. For this, the silhouette based on the angle of neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach has many non-trivial problems of joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179 [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670 [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater Flow. In Proceedings of IAMG 2015 Freiberg, pp. 813-822
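The pathlets central to STRING are short particle paths advected through the velocity field, reflecting the Lagrangian view described above. Below is a minimal sketch that traces pathlets through a synthetic steady 2D flow with forward-Euler integration; STRING's actual FPM-based seeding and GPU rendering are far more sophisticated:

```python
import numpy as np

def velocity(p):
    """Synthetic steady 2D flow: circulation around the origin plus a slow drift."""
    x, y = p
    return np.array([-y + 0.1, x])

def trace_pathlet(seed, dt=0.05, steps=40):
    """Advect a seed point through the velocity field with forward Euler."""
    points = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        p = points[-1]
        points.append(p + dt * velocity(p))
    return np.array(points)  # (steps + 1, 2) polyline approximating the pathlet

# Seed a small grid of pathlets, a crude stand-in for STRING's intelligent seeding.
seeds = [(x, y) for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)]
pathlets = [trace_pathlet(s) for s in seeds]
print(len(pathlets), "pathlets,", pathlets[0].shape[0], "points each")
```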
Human Factors and Modeling Methods in the Development of Control Room Modernization Concepts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugo, Jacques V.; Slay III, Lorenzo
nuclear power plants. Although the nuclear industry has made steady improvement in outage optimization, each day of a refueling outage still represents an opportunity to save millions of dollars, and each day an outage extends past its planned end date represents millions of dollars that may have been spent unnecessarily. Reducing planned outage duration or preventing outage extensions requires careful management of the outage schedule as well as constant oversight and monitoring of work completion during the outage execution. During a typical outage there are more than 10,000 activities on the schedule that, if not managed efficiently, may cause expensive outage delays. Management of outages currently relies largely on paper-based resources and general-purpose office software. A typical tool currently used to monitor work performance is a burn-down curve, where total remaining activities are plotted against the baseline schedule to track bulk work completion progress. While these tools are useful, there is still considerable uncertainty during a typical outage that bulk work progress is adequate, and therefore a lot of management time is spent analyzing the situation on a daily basis. This paper describes recent advances made in developing a framework for the design of visual outage information presentation, as well as an overview of the scientific principles that informed the development of the visualizations. To test the utility of advanced visual outage information presentation, an outage management dashboard software application was created as part of the Department of Energy's Advanced Outage Control Center project. This dashboard is intended to present all the critical information an outage manager would need to understand the current status of a refueling outage. The dashboard presents the critical path, bulk work performance, key performance indicators, outage milestones and metrics relating current performance to historical performance. Additionally, the dashboard includes data analysis tools to allow outage managers to drill down into the underlying data to understand the drivers of the indicators.
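A burn-down curve of the kind described is simply the count of remaining activities plotted against time alongside the baseline schedule. Here is a minimal matplotlib sketch with synthetic numbers, not plant data:

```python
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(0, 21)                          # outage days
baseline = np.linspace(10000, 0, len(days))      # planned remaining activities

rng = np.random.default_rng(1)
completed = np.concatenate(([0], np.cumsum(rng.integers(350, 600, len(days) - 1))))
actual = np.maximum(10000 - completed, 0)        # synthetic actual remaining activities

plt.plot(days, baseline, "--", label="Baseline schedule")
plt.step(days, actual, where="post", label="Actual remaining activities")
plt.xlabel("Outage day")
plt.ylabel("Remaining activities")
plt.title("Outage burn-down curve (synthetic data)")
plt.legend()
plt.tight_layout()
plt.show()
```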
Real-time Magnetic Resonance Imaging Guidance for Cardiovascular Procedures
Horvath, Keith A.; Li, Ming; Mazilu, Dumitru; Guttman, Michael A.; McVeigh, Elliot R.
2008-01-01
Magnetic resonance imaging (MRI) of the cardiovascular system has proven to be an invaluable diagnostic tool. Given the ability to allow for real-time imaging, MRI guidance of intraoperative procedures can provide superb visualization, which can facilitate a variety of interventions and minimize the trauma of the operations as well. In addition to the anatomic detail, MRI can provide intraoperative assessment of organ and device function. Instruments and devices can be marked to enhance visualization and tracking. All of this represents an advance over standard x-ray or ultrasonic imaging. PMID:18395633
National Laboratory for Advanced Scientific Visualization at UNAM - Mexico
NASA Astrophysics Data System (ADS)
Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo
2016-04-01
In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, as well as business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services that spans a variety of areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics and mathematics related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance-computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with projected images on the front, left and right, as well as floor walls. Specialized crystal eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization, such as geophysical, meteorological, climate and ecology data. The HPCC-ADA is a 1000+ computing core system, which offers parallel computing resources to applications that require large quantities of memory as well as large and fast parallel storage systems. The entire system temperature is controlled by an energy- and space-efficient cooling solution, based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.
Hovick, Shelly R; Bevers, Therese B; Vidrine, Jennifer Irvin; Kim, Stephanie; Dailey, Phokeng M; Jones, Lovell A; Peterson, Susan K
2017-03-01
Online cancer risk assessment tools, which provide personalized cancer information and recommendations based on personal data input by users, are a promising cancer education approach; however, few tools have been evaluated. A randomized controlled study was conducted to compare user impressions of one tool, Cancer Risk Check (CRC), to non-personalized educational information delivered online as a series of self-advancing slides (the control). CRC users (N = 1452) rated the tool to be as interesting as the control (p > .05), but users were more likely to report that the information was difficult to understand and not applicable to them (p < .05). Information seeking and sharing also were lower among CRC users; thus, although impressions of CRC were favorable, it was not shown to be superior to existing approaches. We hypothesized CRC was less effective because it contained few visual and graphical elements; therefore, CRC was compared to a text-based control (online PDF file) post hoc. CRC users rated the information to be more interesting, less difficult to understand, and better able to hold their attention (p < .05). Post hoc results suggest the visual presentation of risk is critical to tool success.
Digital fabrication of textiles: an analysis of electrical networks in 3D knitted functional fabrics
NASA Astrophysics Data System (ADS)
Vallett, Richard; Knittel, Chelsea; Christe, Daniel; Castaneda, Nestor; Kara, Christina D.; Mazur, Krzysztof; Liu, Dani; Kontsos, Antonios; Kim, Youngmoo; Dion, Genevieve
2017-05-01
Digital fabrication methods are reshaping design and manufacturing processes through the adoption of pre-production visualization and analysis tools, which help minimize waste of materials and time. Despite the increasingly widespread use of digital fabrication techniques, comparatively few of these advances have benefited the design and fabrication of textiles. The development of functional fabrics such as knitted touch sensors, antennas, capacitors, and other electronic textiles could benefit from the same advances in electrical network modeling that revolutionized the design of integrated circuits. In this paper, the efficacy of using current state-of-the-art digital fabrication tools over the more common trial-and-error methods currently used in textile design is demonstrated. Gaps are then identified in the current state-of-the-art tools that must be resolved to further develop and streamline the rapidly growing field of smart textiles and devices, bringing textile production into the realm of 21st century manufacturing.
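The electrical network modeling the authors point to can be illustrated with classical nodal analysis: assemble a conductance (weighted Laplacian) matrix for the conductor network, ground one node, inject a current, and solve for the node voltages. The topology and resistance values below are invented, not measurements from a knitted fabric:

```python
import numpy as np

# Edges of a small resistor network: (node_i, node_j, resistance_ohms).
edges = [(0, 1, 5.0), (1, 2, 5.0), (0, 3, 10.0), (1, 4, 10.0),
         (2, 5, 10.0), (3, 4, 5.0), (4, 5, 5.0)]
n_nodes = 6

# Assemble the conductance (weighted Laplacian) matrix G.
G = np.zeros((n_nodes, n_nodes))
for i, j, r in edges:
    g = 1.0 / r
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g

# Inject 1 A at node 0, extract it at node 5, and ground node 5 (reference voltage 0).
current = np.zeros(n_nodes)
current[0], current[5] = 1.0, -1.0
keep = list(range(n_nodes - 1))                 # drop the grounded node
v = np.zeros(n_nodes)
v[keep] = np.linalg.solve(G[np.ix_(keep, keep)], current[keep])

print("Node voltages (V):", np.round(v, 3))
print("Effective resistance node 0 -> 5 (ohm):", round(v[0] - v[5], 3))
```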
Visualization and interaction tools for aerial photograph mosaics
NASA Astrophysics Data System (ADS)
Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António
1997-05-01
This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophoto mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for the education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.
High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.
Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung
2018-05-04
Kohonen's Self Organizing feature Map (SOM) provides an effective way to project high dimensional input features onto a low dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits from the use of HRSOMs in both clustering and classification problems can be expected. Copyright © 2018 Elsevier Ltd. All rights reserved.
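Below is a minimal NumPy rendition of Kohonen's batch SOM update, the algorithm that HRSOMs scale up to high map resolutions; the map size, neighborhood schedule, and toy data are illustrative, not the paper's configuration:

```python
import numpy as np

def train_batch_som(data, rows=20, cols=20, epochs=30, seed=0):
    """Train a 2-D SOM with the batch update rule and a shrinking Gaussian kernel."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.normal(size=(rows * cols, dim))
    # Grid coordinates of every map unit, used for the neighborhood kernel.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    grid_d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)

    for epoch in range(epochs):
        sigma = max(rows, cols) / 2.0 * (1.0 - epoch / epochs) + 0.5
        # Best-matching unit for every sample.
        d2 = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1)
        bmu = d2.argmin(axis=1)
        # Batch update: each unit becomes a kernel-weighted mean of the data.
        h = np.exp(-grid_d2[:, bmu] / (2.0 * sigma ** 2))     # (units, samples)
        mass = h.sum(axis=1, keepdims=True)
        updated = (h @ data) / np.maximum(mass, 1e-12)
        weights = np.where(mass > 1e-12, updated, weights)    # keep units with no kernel mass
    return weights.reshape(rows, cols, dim)

# Toy data: three Gaussian clusters in 5 dimensions.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.3, size=(100, 5)) for m in (-2.0, 0.0, 2.0)])
som = train_batch_som(data)
print("Trained map shape:", som.shape)
```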
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and the visualization system, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
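Conceptually, the query-driven approach described here selects the small subset of particles satisfying range conditions before anything is rendered. FastBit does this with bitmap indices over files on disk; the sketch below only mimics the selection logic with NumPy boolean masks on an in-memory synthetic particle set, with made-up field names and thresholds:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000  # synthetic particles from a mock LWFA time step

particles = {
    "x":  rng.uniform(0.0, 100.0, n),   # longitudinal position (arbitrary units)
    "px": rng.lognormal(0.0, 1.0, n),   # longitudinal momentum (arbitrary units)
    "py": rng.normal(0.0, 0.2, n),      # transverse momentum (arbitrary units)
}

# Range query analogous to a FastBit selection: high-momentum, well-collimated particles.
mask = (particles["px"] > 10.0) & (np.abs(particles["py"]) < 0.1)
beam_ids = np.nonzero(mask)[0]

print(f"{beam_ids.size} of {n} particles selected "
      f"({100.0 * beam_ids.size / n:.3f}%) for visualization")
```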
An Update on Improvements to NiCE Support for PROTEUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay
2015-09-01
The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application, PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel, finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report will detail the work done to improve existing PROTEUS workflow support in NiCE. We will demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.
Dasgupta, Aritra; Poco, Jorge; Wei, Yaxing; ...
2015-03-16
Evaluation methodologies in visualization have mostly focused on how well the tools and techniques cater to the analytical needs of the user. While this is important in determining the effectiveness of the tools and advancing the state-of-the-art in visualization research, a key area that has mostly been overlooked is how well established visualization theories and principles are instantiated in practice. This is especially relevant when domain experts, and not visualization researchers, design visualizations for analysis of their data or for broader dissemination of scientific knowledge. There is very little research on exploring the synergistic capabilities of cross-domain collaboration between domain experts and visualization researchers. To fill this gap, in this paper we describe the results of an exploratory study of climate data visualizations conducted in tight collaboration with a pool of climate scientists. The study analyzes a large set of static climate data visualizations for identifying their shortcomings in terms of visualization design. The outcome of the study is a classification scheme that categorizes the design problems in the form of a descriptive taxonomy. The taxonomy is a first attempt for systematically categorizing the types, causes, and consequences of design problems in visualizations created by domain experts. We demonstrate the use of the taxonomy for a number of purposes, such as improving the existing climate data visualizations, reflecting on the impact of the problems for enabling domain experts in designing better visualizations, and also learning about the gaps and opportunities for future visualization research. We demonstrate the applicability of our taxonomy through a number of examples and discuss the lessons learnt and implications of our findings.
From printed color to image appearance: tool for advertising assessment
NASA Astrophysics Data System (ADS)
Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro
2012-07-01
We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific source of light. Knowing in advance the visual rendering of an image in different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel image (CMYK) into a spectral one, considering the paper support, then it simulates the chosen illumination, and finally computes an estimation of the appearance.
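The illumination and appearance-estimation steps rest on standard colorimetry: tristimulus values are obtained by integrating reflectance times the illuminant's spectral power against the CIE colour-matching functions. The sketch below does this for one pixel with coarse, made-up spectra and is not the authors' appearance model:

```python
import numpy as np

# Wavelength samples (nm), 400-700 nm in 50 nm steps (coarse, for illustration only).
wavelengths = np.arange(400, 701, 50)

# Made-up spectral quantities sampled at those wavelengths:
reflectance = np.array([0.10, 0.15, 0.30, 0.60, 0.70, 0.65, 0.60])    # printed patch
illuminant  = np.array([60.0, 90.0, 110.0, 100.0, 95.0, 90.0, 85.0])  # light source SPD

# Very coarse stand-ins for the CIE 1931 colour-matching functions x̄, ȳ, z̄.
xbar = np.array([0.08, 0.20, 0.05, 0.45, 0.95, 0.60, 0.15])
ybar = np.array([0.00, 0.05, 0.30, 0.80, 0.90, 0.40, 0.06])
zbar = np.array([0.40, 1.10, 0.50, 0.05, 0.01, 0.00, 0.00])

# Tristimulus integration: X = k * sum(R * S * xbar), with k normalising Y of white to 100.
k = 100.0 / np.sum(illuminant * ybar)
X = k * np.sum(reflectance * illuminant * xbar)
Y = k * np.sum(reflectance * illuminant * ybar)
Z = k * np.sum(reflectance * illuminant * zbar)

print(f"XYZ under this illuminant: ({X:.1f}, {Y:.1f}, {Z:.1f})")
```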
VISIBIOweb: visualization and layout services for BioPAX pathway models
Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur
2010-01-01
With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470
ERIC Educational Resources Information Center
Gokalp, Sumeyra; Horton, William; Jónsdóttir-Lewis, Elfa B.; Foster, Michelle; Török, Marianna
2018-01-01
To facilitate learning advanced instrumental techniques, essential tools for visualizing biomaterials, a simple and versatile laboratory exercise demonstrating the use of Atomic Force Microscopy (AFM) in biomedical applications was developed. In this experiment, the morphology of heat-denatured and amyloid-type aggregates formed from a low-cost…
Hands-On Data Analysis: Using 3D Printing to Visualize Reaction Progress Surfaces
ERIC Educational Resources Information Center
Higman, Carolyn S.; Situ, Henry; Blacklin, Peter; Hein, Jason E.
2017-01-01
Advances in 3D printing technology over the past decade have led to its expansion into all subfields of science, including chemistry. This technology provides useful teaching tools that facilitate communication of difficult chemical concepts to students and researchers. Presented here is the use of 3D printing technology to create tangible models…
Mining Data from Interactions with a Motivational-Aware Tutoring System Using Data Visualization
ERIC Educational Resources Information Center
Rebolledo-Mendez, Genaro; Du Boulay, Benedict; Luckin, Rosemary; Benitez-Guerrero, Edgard Ivan
2013-01-01
Tutoring systems are a common tool for delivering educational content and recent advances in this field include the detection of and reaction to learners' motivation. A data set derived from interactions in a tutoring system and its motivationally-aware variant provided opportunities to discover patterns of behavior in connection with motivational…
Multi-modal virtual environment research at Armstrong Laboratory
NASA Technical Reports Server (NTRS)
Eggleston, Robert G.
1995-01-01
One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.
Advanced boundary layer transition measurement methods for flight applications
NASA Technical Reports Server (NTRS)
Holmes, B. J.; Croom, C. C.; Gail, P. D.; Manuel, G. S.; Carraway, D. L.
1986-01-01
In modern laminar flow flight research, it is important to understand the specific cause(s) of laminar to turbulent boundary-layer transition. Such information is crucial to the exploration of the limits of practical application of laminar flow for drag reduction on aircraft. The transition modes of interest in current flight investigations include the viscous Tollmien-Schlichting instability, the inflectional instability at laminar separation, and the crossflow inflectional instability, as well as others. This paper presents the results to date of research on advanced devices and methods used for the study of laminar boundary-layer transition phenomena in the flight environment. Recent advancements in the development of arrayed hot-film devices and of a new flow visualization method are discussed. Arrayed hot-film devices have been designed to detect the presence of laminar separation, and of crossflow vorticity. The advanced flow visualization method utilizes color changes in liquid-crystal coatings to detect boundary-layer transition at high altitude flight conditions. Flight and wind tunnel data are presented to illustrate the design and operation of these advanced methods. These new research tools provide information on disturbance growth and transition mode which is essential to furthering our understanding of practical design limits for applications of laminar flow technology.
Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric
2011-01-01
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
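The Connectome File Format is described as an XML-based container with structured metadata. The element and attribute names below are hypothetical rather than the actual CFF schema; the sketch only shows the general pattern of reading such a manifest with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical container manifest -- tag and attribute names are illustrative only.
manifest_xml = """
<connectome-container version="0.1">
  <metadata subject="sub-01" modality="dMRI"/>
  <network src="networks/connectome_68.graphml" name="68-region network"/>
  <volume  src="volumes/t1.nii.gz"              name="T1-weighted image"/>
  <track   src="tracks/streamlines.trk"         name="whole-brain tractography"/>
</connectome-container>
"""

root = ET.fromstring(manifest_xml)
print("Subject:", root.find("metadata").get("subject"))
for element in root:
    if element.tag == "metadata":
        continue
    # Each data object records its type, a display name, and a path inside the container.
    print(f"{element.tag:8s} {element.get('name'):25s} -> {element.get('src')}")
```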
Concept of Operations Visualization for Ares I Production
NASA Technical Reports Server (NTRS)
Chilton, Jim; Smith, David Alan
2008-01-01
Establishing Computer Aided Design models of the Ares I production facility, tooling and vehicle components and integrating them into manufacturing visualizations/simulations allows Boeing and NASA to collaborate in real time early in the design/development cycle. This collaboration identifies cost-effective and lean solutions that can be easily shared with Ares stakeholders (e.g., other NASA Centers and potential science users). These Ares I production visualizations and analyses by their nature serve as early manufacturing improvement precursors for other Constellation elements to be built at the Michoud Assembly Facility such as Ares V and the Altair Lander. Key to this Boeing and Marshall Space Flight Center collaboration has been the use of advanced virtual manufacturing tools to understand the existing Shuttle era infrastructure and trade potential modifications to support Ares I production. These approaches are then used to determine an optimal manufacturing configuration in terms of labor efficiency, safety and facility enhancements. These same models and tools can be used in an interactive simulation of Ares I and V flight to the Space Station or moon to educate the human space constituency (e.g., government, academia, media and the public) in order to increase national and international understanding of Constellation goals and benefits.
Toward visual user interfaces supporting collaborative multimedia content management
NASA Astrophysics Data System (ADS)
Husein, Fathi; Leissler, Martin; Hemmje, Matthias
2000-12-01
Supporting collaborative multimedia content management activities, such as image and video acquisition, exploration, and access dialogues between naive users and multimedia information systems, is a non-trivial task. Although a wide variety of experimental and prototypical multimedia storage technologies as well as corresponding indexing and retrieval engines are available, most of them lack appropriate support for collaborative, end-user-oriented user interface front ends. The development of advanced, user-adaptable interfaces is necessary for building collaborative multimedia information-space presentations based upon advanced tools for information browsing, searching, filtering, and brokering to be applied to potentially very large and highly dynamic multimedia collections with a large number of users and user groups. Therefore, the development of advanced and at the same time adaptable and collaborative computer-graphical information presentation schemes that make it easy to apply adequate visual metaphors for defined target user stereotypes has to become a key focus within ongoing research activities trying to support collaborative information work with multimedia collections.
Color-quality control using color-difference formulas: progress and problems
NASA Astrophysics Data System (ADS)
Melgosa, M.; Gómez-Robledo, L.; García, P. A.; Morillas, S.; Fernández-Maloigne, C.; Richard, N.; Huang, M.; Li, C.; Cui, G.
2017-08-01
We report on some recent advances in industrial color-difference evaluation focused on three main fields: development of reliable experimental visual datasets; proposal of new color spaces and color-difference formulas; and tools to evaluate the merits of color-difference formulas. The use of fuzzy techniques to assign consistency degrees to color pairs in combined visual datasets is described. The CIE/ISO joint proposal of the CIEDE2000 color-difference formula as a standard will facilitate communication among companies and users. The CIE recommendation of the STRESS index to assess observers' variability and the relative merits of different color-difference formulas is reported. Power functions are an efficient method to improve the performance of modern color-difference formulas. We need advanced color-difference formulas accounting for new materials with different kinds of textures and gonioapparent effects.
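The baseline for all of these formulas is the Euclidean CIELAB distance (CIE76); CIEDE2000 adds lightness, chroma, and hue weightings on top of it. A minimal sketch of the CIE76 case with arbitrary sample coordinates:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL * dL + da * da + db * db)

# Two arbitrary CIELAB colours, e.g. a printed patch and its reference.
reference = (52.0, 42.5, 20.1)
sample    = (51.2, 44.0, 18.7)
print(f"Delta E*ab = {delta_e_76(reference, sample):.2f}")
```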
Imaging anatomy of the vestibular and visual systems.
Gunny, Roxana; Yousry, Tarek A
2007-02-01
This review will outline the imaging anatomy of the vestibular and visual pathways, using computed tomography and magnetic resonance imaging, with emphasis on the more recent developments in neuroimaging. Technical advances in computed tomography and magnetic resonance imaging, such as the advent of multislice computed tomography and newer magnetic resonance imaging techniques such as T2-weighted magnetic resonance cisternography, have improved the imaging of the vestibular and visual pathways, allowing better visualization of the end organs and peripheral nerves. Higher field strength magnetic resonance imaging is a promising tool, which has been used to evaluate and resolve fine anatomic detail in vitro, as in the labyrinth. Advanced magnetic resonance imaging techniques such as functional magnetic resonance imaging and diffusion tractography have been used to identify cortical areas of activation and associated white matter pathways, and show potential for the future identification of complex neuronal relays involved in integrating these pathways. The assessment of the various components of the vestibular and the visual systems has improved with more detailed research on the imaging anatomy of these systems, the advent of high field magnetic resonance scanners and multislice computerized tomography, and the wider use of specific techniques such as tractography which displays white matter tracts not directly accessible until now.
Chen, Yuting; Santos, Abel; Wang, Ye; Kumeria, Tushar; Ho, Daena; Li, Junsheng; Wang, Changhai; Losic, Dusan
2015-01-01
Herein, we present a systematic study on the development, optimisation and applicability of interferometrically coloured distributed Bragg reflectors based on nanoporous anodic alumina (NAA-DBRs) in the form of films and nanoporous microparticles as visual/colorimetric analytical tools. Firstly, we synthesise a complete palette of NAA-DBRs by galvanostatic pulse anodisation approach, in which the current density is altered in a periodic fashion in order to engineer the effective medium of the resulting photonic films in depth. NAA-DBR photonic films feature vivid colours that can be tuned across the UV-visible-NIR spectrum by structural engineering. Secondly, the effective medium of the resulting photonic films is assessed systematically by visual analysis and reflectometric interference spectroscopy (RIfS) in order to establish the most optimal nanoporous platforms to develop visual/colorimetric tools. Then, we demonstrate the applicability of NAA-DBR photonic films as a chemically selective sensing platform for visual detection of mercury(II) ions. Finally, we generate a new nanomaterial, so-called photonic dust, by breaking down NAA-DBRs films into nanoporous microparticles. The resulting microparticles (μP-NAA-DBRs) display vivid colours and are sensitive towards changes in their effective medium, opening new opportunities for developing advanced photonic nanotools for a broad range of applications. PMID:26245759
A Virtual Map to Support People Who Are Blind in Navigation through Real Spaces
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Kumar, Siddarth; Srinivasan, Mandayam A.
2011-01-01
Most of the spatial information needed by sighted people to construct cognitive maps of spaces is gathered through the visual channel. Unfortunately, people who are blind lack the ability to collect the required spatial information in advance. The use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on…
Through the High-Tech Looking Glass | Center for Cancer Research
Science begins with observation; scientists have made telescopes to examine things farther away than the eye can see and microscopes to examine things invisible to human vision. Since Robert Hooke in the 17th century used the first microscope to document the existence of living cells, advances in cell biology have been tied to ever more innovative tools for visualizing and
Cool Tools for the New Frontier: Technological Advances Help Associates Tell Their Story.
ERIC Educational Resources Information Center
Hersch, James
1998-01-01
Argues that creation of a World Wide Web site that makes good use of the available digital audio and visual technologies can be useful in campus activities planning and advertising. The design of a good Web site and the potential uses of digital video and compact discs are discussed. Costs of these technologies are also outlined. (MSE)
DRIVE: Drive-Cycle Rapid Investigation, Visualization, and Evaluation
specialized statistical clustering methods. DRIVE can benefit a variety of users. For example, fleet managers can use the tool to make educated investment decisions by determining, in advance, the payback period for a given technology.
Kastberger, G; Kranner, G
2000-02-01
Viscovery SOMine is a software tool for advanced analysis and monitoring of numerical data sets. It was developed for professional use in business, industry, and science and to support dependency analysis, deviation detection, unsupervised clustering, nonlinear regression, data association, pattern recognition, and animated monitoring. Based on the concept of self-organizing maps (SOMs), it employs a robust variant of unsupervised neural networks--namely, Kohonen's Batch-SOM, which is further enhanced with a new scaling technique for speeding up the learning process. This tool provides a powerful means by which to analyze complex data sets without prior statistical knowledge. The data representation contained in the trained SOM is systematically converted to be used in a spectrum of visualization techniques, such as evaluating dependencies between components, investigating geometric properties of the data distribution, searching for clusters, or monitoring new data. We have used this software tool to analyze and visualize multiple influences of the ocellar system on free-flight behavior in giant honeybees. Occlusion of ocelli will affect orienting reactivities in relation to flight target, level of disturbance, and position of the bee in the flight chamber; it will induce phototaxis and make orienting imprecise and dependent on motivational settings. Ocelli permit the adjustment of orienting strategies to environmental demands by enforcing abilities such as centering or flight kinetics and by providing independent control of posture and flight course.
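Viscovery SOMine's core is Kohonen's Batch-SOM. The sketch below is a minimal numpy implementation of a plain batch-SOM pass, not the vendor's enhanced scaling technique; the map size, neighbourhood schedule and random input data are arbitrary.

```python
import numpy as np

def batch_som(data, rows=6, cols=6, iters=20, sigma0=2.0, seed=0):
    """Minimal batch self-organizing map (plain Kohonen Batch-SOM)."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.normal(size=(rows * cols, dim))
    # Grid coordinates of each map unit, used by the neighbourhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        sigma = sigma0 * (1.0 - t / iters) + 0.5           # shrinking radius
        # 1) Assign every sample to its best-matching unit (BMU).
        d2 = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1)
        bmu = d2.argmin(axis=1)
        # 2) Batch update: each unit becomes the neighbourhood-weighted mean of
        #    all samples, weighted by grid closeness to each sample's BMU.
        g2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
        h = np.exp(-g2 / (2 * sigma ** 2))                 # unit-to-unit kernel
        w = h[:, bmu]                                      # (units, samples)
        weights = (w @ data) / w.sum(axis=1, keepdims=True)
    return weights.reshape(rows, cols, dim)

codebook = batch_som(np.random.default_rng(1).normal(size=(200, 3)))
print(codebook.shape)  # (6, 6, 3) map of prototype vectors
```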
More than words: Using visual graphics for community-based health research.
Morton Ninomiya, Melody E
2017-04-20
With increased attention to knowledge translation and community engagement in the applied health research field, many researchers aim to find effective ways of engaging health policy and decision makers and community stakeholders. While visual graphics such as graphs, charts, figures and photographs are common in scientific research dissemination, they are less common as a communication tool in research. In this commentary, I illustrate how and why visual graphics were created and used to facilitate dialogue and communication throughout all phases of a community-based health research study with a rural Indigenous community, advancing community engagement and knowledge utilization of a research study. I suggest that it is essential that researchers consider the use of visual graphics to accurately communicate and translate important health research concepts and content in accessible forms for diverse research stakeholders and target audiences.
What can mice tell us about how vision works?
Huberman, Andrew D.; Niell, Cristopher M.
2012-01-01
Understanding the neural basis of visual perception is a longstanding fundamental goal of neuroscience. Historically, most vision studies were carried out on humans, macaque monkeys and cats. Over the last five years, however, a growing number of researchers have begun using mice to parse the mechanisms underlying visual processing; the rationale is that despite having relatively poor acuity, mice are unmatched in terms of the variety and sophistication of tools available to label, monitor and manipulate specific cell types and circuits. In this review, we discuss recent advances in understanding the mouse visual system at the anatomical, receptive field and perceptual level, focusing on the opportunities and constraints those features provide toward the goal of understanding how vision works. PMID:21840069
Evaluating an immersive virtual environment prototyping and simulation system
NASA Astrophysics Data System (ADS)
Nemire, Kenneth
1997-05-01
An immersive virtual environment (IVE) modeling and simulation tool is being developed for designing advanced weapon and training systems. One unique feature of the tool is that the design, and not just visualization of the design, is accomplished with the IVE tool. Acceptance of IVE tools requires comparisons with current commercial applications. In this pilot study, expert users of a popular desktop 3D graphics application performed identical modeling and simulation tasks using both the desktop and IVE applications. The IVE tool consisted of a head-mounted display, 3D spatialized sound, spatial trackers on head and hands, instrumented gloves, and a simulated speech recognition system. The results are preliminary because performance from only four users has been examined. When using the IVE system, users completed the tasks to criteria in less time than when using the desktop application. Subjective ratings of the visual displays in each system were similar. Ratings for the desktop controls were higher than for the IVE controls. Ratings of immersion and user enjoyment were higher for the IVE than for the desktop application. These results are particularly remarkable because participants had used the desktop application regularly for three to five years and the prototype IVE tool for only three to six hours.
Web-Based Geospatial Visualization of GPM Data with CesiumJS
NASA Technical Reports Server (NTRS)
Lammers, Matt
2018-01-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology have made online visualization of large geospatial datasets such as those coming from precipitation satellites viable. These data benefit from being visualized on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS (http://cesiumjs.org), developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. This presentation will describe how CesiumJS has been used in three-dimensional visualization products developed as part of the NASA Precipitation Processing System (PPS) STORM data-order website. Existing methods of interacting with Global Precipitation Measurement (GPM) Mission data primarily focus on two-dimensional static images, whether displaying vertical slices or horizontal surface/height-level maps. These methods limit interactivity with the robust three-dimensional data coming from the GPM core satellite. Integrating the data with CesiumJS in a web-based user interface has allowed us to create the following products. We have linked with the data-order interface an on-the-fly visualization tool for any GPM/partner satellite orbit. A version of this tool also focuses on high-impact weather events. It enables viewing of combined radar and microwave-derived precipitation data on mobile devices and in a way that can be embedded into other websites. We also have used CesiumJS to visualize a method of integrating gridded precipitation data with modeled wind speeds that animates over time. Emphasis in the presentation will be placed on how a variety of technical methods were used to create these tools, and how the flexibility of the CesiumJS framework facilitates creative approaches to interact with the data.
Price, Jeffrey H; Goodacre, Angela; Hahn, Klaus; Hodgson, Louis; Hunter, Edward A; Krajewski, Stanislaw; Murphy, Robert F; Rabinovich, Andrew; Reed, John C; Heynen, Susanne
2002-01-01
Cellular behavior is complex. Successfully understanding systems at ever-increasing complexity is fundamental to advances in modern science and unraveling the functional details of cellular behavior is no exception. We present a collection of prospectives to provide a glimpse of the techniques that will aid in collecting, managing and utilizing information on complex cellular processes via molecular imaging tools. These include: 1) visualizing intracellular protein activity with fluorescent markers, 2) high throughput (and automated) imaging of multilabeled cells in statistically significant numbers, and 3) machine intelligence to analyze subcellular image localization and pattern. Although not addressed here, the importance of combining cell-image-based information with detailed molecular structure and ligand-receptor binding models cannot be overlooked. Advanced molecular imaging techniques have the potential to impact cellular diagnostics for cancer screening, clinical correlations of tissue molecular patterns for cancer biology, and cellular molecular interactions for accelerating drug discovery. The goal of finally understanding all cellular components and behaviors will be achieved by advances in both instrumentation engineering (software and hardware) and molecular biochemistry. Copyright 2002 Wiley-Liss, Inc.
Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
2011-07-20
This report summarizes work carried out by the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Team for the period of January 1, 2011 through June 30, 2011. It discusses highlights, overall progress, period goals, and collaborations and lists papers and presentations. To learn more about our project, please visit our UV-CDAT website (URL: http://uv-cdat.org). This report will be forwarded to the program manager for the Department of Energy (DOE) Office of Biological and Environmental Research (BER), national and international collaborators and stakeholders, and to researchers working on a wide range of other climate model, reanalysis, and observation evaluation activities. The UV-CDAT executive committee consists of Dean N. Williams of Lawrence Livermore National Laboratory (LLNL); Dave Bader and Galen Shipman of Oak Ridge National Laboratory (ORNL); Phil Jones and James Ahrens of Los Alamos National Laboratory (LANL), Claudio Silva of Polytechnic Institute of New York University (NYU-Poly); and Berk Geveci of Kitware, Inc. The UV-CDAT team consists of researchers and scientists with diverse domain knowledge whose home institutions also include the National Aeronautics and Space Administration (NASA) and the University of Utah. All work is accomplished under DOE open-source guidelines and in close collaboration with the project's stakeholders, domain researchers, and scientists. Working directly with BER climate science analysis projects, this consortium will develop and deploy data and computational resources useful to a wide variety of stakeholders, including scientists, policymakers, and the general public. Members of this consortium already collaborate with other institutions and universities in researching data discovery, management, visualization, workflow analysis, and provenance. The UV-CDAT team will address the following high-level visualization requirements: (1) Alternative parallel streaming statistics and analysis pipelines - Data parallelism, Task parallelism, Visualization parallelism; (2) Optimized parallel input/output (I/O); (3) Remote interactive execution; (4) Advanced intercomparison visualization; (5) Data provenance processing and capture; and (6) Interfaces for scientists - Workflow data analysis and visualization construction tools, and Visualization interfaces.
Training using a new multidirectional reach tool improves balance in individuals with stroke.
Khumsapsiri, Numpung; Siriphorn, Akkradate; Pooranawatthanakul, Kanokporn; Oungphalachai, Tanyarut
2018-04-01
Previous studies suggested that limits of stability (LOS) training with visual feedback using commercial equipment could be used to improve balance ability in individuals with stroke. However, this system is expensive. In this study, we created a new tool from inexpensive elements based on LOS training using visual feedback. The aim of this study was to investigate the effect of training using a new multidirectional reach tool on balance in individuals with stroke. A single-blind randomized control trial was conducted. Individuals with stroke (n = 16; age range 38-72 years) were recruited. Participants in the experimental group were trained with the multidirectional reach training for 30 min and conventional physical therapy for 30 min per day, 3 days a week for 4 weeks. Participants in the control group received conventional physical therapy for 30 min per day, 3 days a week for 4 weeks. The outcomes were LOS, weight-bearing squat, and Fullerton Advanced Balance scale. All of the outcome measures were measured at pretraining, post-training, and 1-month follow-up. At post-training and 1-month follow-up, the participants in the experimental group showed greater improvement in dynamic balance than the control group. Furthermore, activity as assessed by the Fullerton Advanced Balance scale improved more at the 1-month follow-up in the experimental group than in the control group. The results of this study provide evidence that training using a new multidirectional reach tool is effective for improving balance in individuals with stroke. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Pontius, J.; Duncan, J.
2017-12-01
Land managers are often faced with balancing management activities to accomplish a diversity of management objectives, in systems faced with many stress agents. Advances in ecosystem modeling provide a rich source of information to inform management. Coupled with advances in decision support techniques and computing capabilities, interactive tools are now accessible for a broad audience of stakeholders. Here we present one such tool designed to capture information on how climate change may impact forested ecosystems, and how that impact varies spatially across the landscape. This tool integrates empirical models of current and future forest structure and function in a structured decision framework that allows users to customize weights for multiple management objectives and visualize suitability outcomes across the landscape. Combined with climate projections, the resulting products allow stakeholders to compare the relative success of various management objectives on a pixel by pixel basis and identify locations where management outcomes are most likely to be met. Here we demonstrate this approach with the integration of several of the preliminary models developed to map species distributions, sugar maple health, forest fragmentation risk and hemlock vulnerability to hemlock woolly adelgid under current and future climate scenarios. We compare three use case studies with objective weightings designed to: 1) Identify key parcels for sugarbush conservation and management, 2) Target state lands that may serve as hemlock refugia from hemlock woolly adelgid induced mortality, and 3) Examine how climate change may alter the success of managing for both sugarbush and hemlock across privately owned lands. This tool highlights the value of flexible models that can be easily run with customized weightings in a dynamic, integrated assessment that allows users to hone in on their potentially complex management objectives, and to visualize and prioritize locations across the landscape. It also demonstrates the importance of including climate considerations for long-term management. This merging of scientific knowledge with the diversity of stakeholder needs is an important step towards using science to inform management and policy decisions.
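The structured-decision step described above amounts to a weighted overlay of per-objective suitability layers with stakeholder-chosen weights. A minimal numpy sketch follows; the layer names, weights and random rasters are hypothetical and stand in for the empirical model outputs.

```python
import numpy as np

# Hypothetical normalized suitability layers (0-1) on a common grid.
rng = np.random.default_rng(0)
layers = {
    "sugar_maple_health": rng.random((100, 100)),
    "hemlock_refugia":    rng.random((100, 100)),
    "low_fragmentation":  rng.random((100, 100)),
}

def weighted_suitability(layers, weights):
    """Pixel-wise weighted overlay; weights are renormalized to sum to 1."""
    w = np.array([weights[k] for k in layers])
    w = w / w.sum()
    stack = np.stack(list(layers.values()))
    return np.tensordot(w, stack, axes=1)

# A stakeholder emphasizing sugarbush conservation over the other objectives.
score = weighted_suitability(layers, {"sugar_maple_health": 0.6,
                                      "hemlock_refugia": 0.2,
                                      "low_fragmentation": 0.2})
print(score.shape, float(score.max()))  # per-pixel suitability to map and rank
```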
NASA Astrophysics Data System (ADS)
Ra, Hyejun; Gonzalez-Gonzalez, Emilio; Smith, Bryan R.; Gambhir, Sanjiv S.; Kino, Gordon S.; Solgaard, Olav; Kaspar, Roger L.; Contag, Christopher H.
2010-05-01
Transgenic reporter mice and advances in imaging instrumentation are enabling real-time visualization of cellular mechanisms in living subjects and accelerating the development of novel therapies. Innovative confocal microscope designs are improving their utility for microscopic imaging of fluorescent reporters in living animals. We develop dual-axis confocal (DAC) microscopes for such in vivo studies and create mouse models where fluorescent proteins are expressed in the skin for the purpose of advancing skin therapeutics and transdermal delivery tools. Three-dimensional image volumes, through the different skin compartments of the epidermis and dermis, can be acquired in several seconds with the DAC microscope in living mice, and are comparable to histologic analyses of reporter protein expression patterns in skin sections. Intravital imaging with the DAC microscope further enables visualization of green fluorescent protein (GFP) reporter gene expression in the skin over time, and quantification of transdermal delivery of small interfering RNA (siRNA) and therapeutic efficacy. Visualization of transdermal delivery of nucleic acids will play an important role in the development of innovative strategies for treating skin pathologies.
Joslin, A C; Green, R; German, J B; Lange, M C
2014-09-01
Advances in the development of bioinformatic tools continue to improve investigators' ability to interrogate, organize, and derive knowledge from large amounts of heterogeneous information. These tools often require advanced technical skills not possessed by life scientists. User-friendly, low-barrier-to-entry methods of visualizing nutrigenomics information are yet to be developed. We utilized concept mapping software from the Institute for Human and Machine Cognition to create a conceptual model of diet and health-related data that provides a foundation for future nutrigenomics ontologies describing published nutrient-gene/polymorphism-phenotype data. In this model, maps containing phenotype, nutrient, gene product, and genetic polymorphism interactions are visualized as triples of two concepts linked together by a linking phrase. These triples, or "knowledge propositions," contextualize aggregated data and information into easy-to-read knowledge maps. Maps of these triples enable visualization of genes spanning the One-Carbon Metabolism (OCM) pathway, their sequence variants, and multiple literature-mined associations including concepts relevant to nutrition, phenotypes, and health. The concept map development process documents the incongruity of information derived from pathway databases versus literature resources. This conceptual model highlights the importance of incorporating information about genes in upstream pathways that provide substrates, as well as downstream pathways that utilize products of the pathway under investigation, in this case OCM. Other genes and their polymorphisms, such as TCN2 and FUT2, although not directly involved in OCM, potentially alter OCM pathway functionality. These upstream gene products regulate substrates such as B12. Constellations of polymorphisms affecting the functionality of genes along OCM, together with substrate and cofactor availability, may impact resultant phenotypes. These conceptual maps provide a foundational framework for development of nutrient-gene/polymorphism-phenotype ontologies and systems visualization.
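The "knowledge propositions" described here are concept-link-concept triples. The snippet below sketches one simple way such triples could be stored and queried in code; the example propositions are illustrative paraphrases in the spirit of the abstract, not data taken from the study.

```python
# Each proposition is a (concept, linking phrase, concept) triple.
triples = [
    ("MTHFR C677T", "reduces activity of", "MTHFR"),
    ("MTHFR", "participates in", "one-carbon metabolism"),
    ("folate intake", "modulates", "homocysteine level"),
    ("TCN2", "regulates transport of", "vitamin B12"),
    ("vitamin B12", "is a cofactor in", "one-carbon metabolism"),
]

def neighbours(concept, triples):
    """All propositions that mention a concept, for building a concept map."""
    return [t for t in triples if concept in (t[0], t[2])]

for subj, link, obj in neighbours("one-carbon metabolism", triples):
    print(f"{subj} --[{link}]--> {obj}")
```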
Visualization of small scale structures on high resolution DEMs
NASA Astrophysics Data System (ADS)
Kokalj, Žiga; Zakšek, Klemen; Pehani, Peter; Čotar, Klemen; Oštir, Krištof
2015-04-01
Knowledge of terrain morphology is very important for observation of numerous processes and events and digital elevation models are therefore one of the most important datasets in geographic analyses. Furthermore, recognition of natural and anthropogenic microrelief structures, which can be observed on detailed terrain models derived from aerial laser scanning (lidar) or structure-from-motion photogrammetry, is of paramount importance in many applications. In this paper we thus examine and evaluate methods of raster lidar data visualization for the determination (recognition) of microrelief features and present a series of strategies to assist in selecting the preferred visualization for structures of various shapes and sizes, set in varied landscapes. Often the answer is not definite and more frequently a combination of techniques has to be used to map a very diverse landscape. Only very recently have researchers been able to benefit from free software for computing advanced visualizations. These tools are often difficult to understand, have numerous options that confuse the user, or require and produce non-standard data formats, because they were written for specific purposes. We therefore designed the Relief Visualization Toolbox (RVT) as a free, easy-to-use, standalone application to create visualisations from high-resolution digital elevation data. It is tailored to beginners in relief interpretation, but it can also be used by more advanced users in data processing and geographic information systems. It offers a range of techniques, such as simple hillshading and its derivatives, slope gradient, trend removal, positive and negative openness, sky-view factor, and anisotropic sky-view factor. All included methods have been proven to be effective for detection of small scale features and the default settings are optimised to accomplish this task. However, the usability of the tool goes beyond computation for visualization purposes, as sky-view factor, for example, is an essential variable in many fields, e.g. in meteorology. RVT produces two types of results: 1) the original files have a full range of values and are intended for further analyses in geographic information systems, 2) the simplified versions are histogram stretched for visualization purposes and saved as 8-bit GeoTIFF files. This means that they can be explored in non-GIS software, e.g. with simple picture viewers, which is essential when a larger community of non-specialists needs to be considered, e.g. in public collaborative projects. The tool recognizes all frequently used single band raster formats and supports elevation raster file data conversion.
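Of the techniques RVT bundles, simple hillshading is the easiest to reproduce. The sketch below applies the standard analytical illumination formula to a synthetic DEM; it is not RVT's implementation, and the cell size, sun azimuth and elevation are arbitrary example values.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, elevation_deg=45.0):
    """Classic analytical hillshading of a DEM (2D array of elevations)."""
    az = np.radians((360.0 - azimuth_deg + 90.0) % 360.0)   # to math convention
    alt = np.radians(elevation_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Synthetic rippled terrain; scale to 8 bits as RVT does for its simplified output.
dem = np.fromfunction(lambda y, x: np.sin(x / 20.0) * 10 + y * 0.1, (200, 200))
img = (hillshade(dem, cellsize=0.5) * 255).astype(np.uint8)
print(img.min(), img.max())
```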
Biographer: web-based editing and rendering of SBGN compliant biochemical networks.
Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas
2013-06-01
The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL
NASA Astrophysics Data System (ADS)
Masseroli, Marco; Pinciroli, Francesco
2000-12-01
To provide easy retrieval, integration and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables users to process and analyze the retrieved cardiology images and quantify their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to deploy at any site with an intranet or Internet connection. By giving healthcare providers effective tools for querying, visualizing and comprehensively evaluating cardiology medical images and records in all locations where they may need them (i.e., emergency, operating theaters, ward, or even outpatient clinics), the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.
Park, George D; Reed, Catherine L
2015-10-01
Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end, attentional bias (faster RTs toward tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. Lack of short-tool group results suggested that hidden-tool group results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.
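Sensitivity in a go/no-go discrimination task of this kind is conventionally summarized as d-prime, the difference between the z-transformed hit and false-alarm rates. A small sketch with made-up counts (and a log-linear correction so extreme rates stay finite):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for targets appearing near the tool-occupied hand.
print(round(d_prime(hits=42, misses=8, false_alarms=6, correct_rejections=44), 2))
```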
NASA Astrophysics Data System (ADS)
Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent
2010-03-01
When a patient suspected of stroke is admitted to the emergency room, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision anywhere and anytime. We will present a small pilot project evaluating the use of mobile devices such as iPhones in evaluating stroke patients. The results of the evaluation as well as any challenges in setting up the system will also be discussed.
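Of the visualization modes listed (2D slices, MIP/MPR, 3D display), maximum intensity projection and multiplanar reformatting are simple to express in array terms. The sketch below uses a synthetic volume and is illustrative only, not the pilot system's renderer.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def mpr_slice(volume, index, axis=0):
    """Multiplanar reformat: a single orthogonal slice through the volume."""
    return np.take(volume, index, axis=axis)

rng = np.random.default_rng(0)
ct = rng.normal(40, 10, size=(128, 256, 256))   # synthetic HU-like values
ct[60:70, 100:140, 100:140] += 60               # bright synthetic "lesion"
axial_mip = mip(ct, axis=0)                     # 256 x 256 projection
coronal = mpr_slice(ct, 120, axis=1)            # 128 x 256 reformat
print(axial_mip.shape, coronal.shape)
```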
Multidimensional proteomics for cell biology.
Larance, Mark; Lamond, Angus I
2015-05-01
The proteome is a dynamic system in which each protein has interconnected properties - dimensions - that together contribute to the phenotype of a cell. Measuring these properties has proved challenging owing to their diversity and dynamic nature. Advances in mass spectrometry-based proteomics now enable the measurement of multiple properties for thousands of proteins, including their abundance, isoform expression, turnover rate, subcellular localization, post-translational modifications and interactions. Complementing these experimental developments are new data analysis, integration and visualization tools as well as data-sharing resources. Together, these advances in the multidimensional analysis of the proteome are transforming our understanding of various cellular and physiological processes.
NASA Technical Reports Server (NTRS)
Diak, George R.; Huang, Hung-Lung; Kim, Dongsoo
1990-01-01
The paper addresses the concept of synthetic satellite imagery as a visualization and diagnostic tool for understanding satellite sensors of the future and details preliminary results on the quality of soundings from the current sensors. Preliminary results are presented on the quality of soundings from the combination of the High-Resolution Infrared Radiometer Sounder and the Advanced Microwave Sounding Unit. Results are also presented on the first Observing System Simulation Experiment using this data in a mesoscale numerical prediction model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrell, William C; Birkel, Garrett W; Forrer, Mark; Lopez, Teresa; Backman, Tyler W H; Dussault, Michael; Petzold, Christopher J; Baidoo, Edward E K; Costello, Zak; Ando, David; Alonso-Gutierrez, Jorge; George, Kevin W; Mukhopadhyay, Aindrila; Vaino, Ian; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Garcia Martin, Hector
2017-12-15
Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.
Ramanujan, Devarajan; Bernstein, William Z; Chandrasegaran, Senthil K; Ramani, Karthik
2017-01-01
The rapid rise in technologies for data collection has created an unmatched opportunity to advance the use of data-rich tools for lifecycle decision-making. However, the usefulness of these technologies is limited by the ability to translate lifecycle data into actionable insights for human decision-makers. This is especially true in the case of sustainable lifecycle design (SLD), as the assessment of environmental impacts, and the feasibility of making corresponding design changes, often relies on human expertise and intuition. Supporting human sense-making in SLD requires the use of both data-driven and user-driven methods while exploring lifecycle data. A promising approach for combining the two is through the use of visual analytics (VA) tools. Such tools can leverage the ability of computer-based tools to gather, process, and summarize data along with the ability of human experts to guide analyses through domain knowledge or data-driven insight. In this paper, we review previous research that has created VA tools in SLD. We also highlight existing challenges and future opportunities for such tools in different lifecycle stages: design, manufacturing, distribution & supply chain, use-phase, end-of-life, as well as life cycle assessment. Our review shows that while the number of VA tools in SLD is relatively small, researchers are increasingly focusing on the subject matter. Our review also suggests that VA tools can address existing challenges in SLD and that significant future opportunities exist.
NASA Astrophysics Data System (ADS)
Overoye, D.; Lewis, C.; Butler, D. M.; Andersen, T. J.
2016-12-01
The Global Learning and Observations to Benefit the Environment (GLOBE) Program is a worldwide hands-on, primary and secondary school-based science and education program founded on Earth Day 1995. Implemented in 117 countries, GLOBE promotes the teaching and learning of science, supporting students, teachers and scientists worldwide to collaborate with each other on inquiry-based investigations of the Earth system. The GLOBE Data Information System (DIS) currently supports users with the ability to enter data from over 50 different science protocols. GLOBE's Data Access and Visualization tools have been developed to accommodate the need to display and retrieve data from this large number of protocols. The community of users is also diverse, including NASA scientists, citizen scientists and grade school students. The challenge for GLOBE is to meet the needs from this diverse set of users with protocol specific displays that are simple enough for a GLOBE school to use, but also provide enough features for a NASA Scientist to retrieve data sets they are interested in. During the last 3 years, the GLOBE visualization system has evolved to meet the needs of these various users, leveraging user feedback and technological advances. Further refinements and enhancements continue. In this session we review the design and capabilities of the GLOBE visualization and data retrieval tool set, discuss the evolution of these tools, and discuss coming directions.
Invariant visual object recognition and shape processing in rats
Zoccolan, Davide
2015-01-01
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421
Advances in directional borehole radar data analysis and visualization
Smith, D.V.G.; Brown, P.J.
2002-01-01
The U.S. Geological Survey is developing a directional borehole radar (DBOR) tool for mapping fractures and lithologic changes and for underground utility and void detection. An important part of the development of the DBOR tool is data analysis and visualization, with the aim of making the software graphical user interface (GUI) intuitive and easy to use. The DBOR software system consists of a suite of signal and image processing routines written in Research Systems' Interactive Data Language (IDL). The software also serves as a front-end to many widely accepted Colorado School of Mines Center for Wave Phenomena (CWP) Seismic UNIX (SU) algorithms (Cohen and Stockwell, 2001). Although the SU collection runs natively in a UNIX environment, our system seamlessly emulates a UNIX session within a widely used PC operating system (Microsoft Windows) using GNU tools (Noer, 1998). Examples are presented of laboratory data acquired with the prototype tool from two different experimental settings. The first experiment imaged plastic pipes in a macro-scale sand tank. The second experiment monitored the progress of an invasion front resulting from oil injection. Finally, challenges to further development and planned future work are discussed.
Mesoscale brain explorer, a flexible python-based image analysis and visualization tool.
Haupt, Dirk; Vanni, Matthieu P; Bolanos, Federico; Mitelut, Catalin; LeDue, Jeffrey M; Murphy, Tim H
2017-07-01
Imaging of mesoscale brain activity is used to map interactions between brain regions. This work has benefited from the pioneering studies of Grinvald et al., who employed optical methods to image brain function by exploiting the properties of intrinsic optical signals and small molecule voltage-sensitive dyes. Mesoscale interareal brain imaging techniques have been advanced by cell targeted and selective recombinant indicators of neuronal activity. Spontaneous resting state activity is often collected during mesoscale imaging to provide the basis for mapping of connectivity relationships using correlation. However, the information content of mesoscale datasets is vast and is only superficially presented in manuscripts given the need to constrain measurements to a fixed set of frequencies, regions of interest, and other parameters. We describe a new open source tool written in python, termed mesoscale brain explorer (MBE), which provides an interface to process and explore these large datasets. The platform supports automated image processing pipelines with the ability to assess multiple trials and combine data from different animals. The tool provides functions for temporal filtering, averaging, and visualization of functional connectivity relations using time-dependent correlation. Here, we describe the tool and show applications, where previously published datasets were reanalyzed using MBE.
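The connectivity mapping MBE performs rests on correlating pixel time courses. Below is a minimal numpy sketch of a seed-pixel correlation map over a synthetic frame stack; the data, seed location and lack of temporal filtering are simplifications, not MBE's pipeline.

```python
import numpy as np

def seed_correlation_map(stack, seed_rc):
    """Pearson correlation of every pixel's time course with a seed pixel."""
    frames = stack.reshape(stack.shape[0], -1).astype(float)
    frames -= frames.mean(axis=0)
    frames /= frames.std(axis=0) + 1e-12
    seed = frames[:, seed_rc[0] * stack.shape[2] + seed_rc[1]]
    corr = frames.T @ seed / stack.shape[0]
    return corr.reshape(stack.shape[1:])

rng = np.random.default_rng(0)
movie = rng.normal(size=(500, 64, 64))                   # 500 synthetic frames
movie[:, 30:40, 30:40] += rng.normal(size=(500, 1, 1))   # patch with shared signal
cmap = seed_correlation_map(movie, seed_rc=(35, 35))
print(cmap.shape, round(float(cmap[32, 32]), 2))         # patch pixels correlate
```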
NASA Astrophysics Data System (ADS)
Duffy, C.
2008-12-01
The future of environmental observing systems will utilize embedded sensor networks with continuous real-time measurement of hydrologic, atmospheric, biogeochemical, and ecological variables across diverse terrestrial environments. Embedded environmental sensors, benefitting from advances in information sciences, networking technology, materials science, computing capacity, and data synthesis methods, are undergoing revolutionary change. It is now possible to field spatially-distributed, multi-node sensor networks that provide density and spatial coverage previously accessible only via numerical simulation. At the same time, computational tools are advancing rapidly to the point where it is now possible to simulate the physical processes controlling individual parcels of water and solutes through the complete terrestrial water cycle. Our goal for the Penn State Critical Zone Observatory is to apply environmental sensor arrays, integrated hydrologic models, and state-of-the-art visualization deployed and coordinated at a testbed within the Penn State Experimental Forest. The Shale Hills Hydro_Sensorium prototype proposed here is designed to observe land-atmosphere interactions in four dimensions (space and time). The term Hydro_Sensorium implies the totality of physical sensors, models and visualization tools that allow us to perceive the detailed space and time complexities of the water and energy cycle for a watershed or river basin for all physical states and fluxes (groundwater, soil moisture, temperature, streamflow, latent heat, snowmelt, chemistry, isotopes etc.). This research will ultimately catalyze the study of complex interactions between the land surface, subsurface, biological and atmospheric systems over a broad range of scales. The sensor array would be real-time and fully controllable by remote users for "computational steering" and data fusion. Presently fully-coupled physical models are being developed that link the atmosphere-land-vegetation-subsurface system into a fully-coupled distributed system. During the last 5 years the Penn State Integrated Hydrologic Modeling System has been under development as an open-source community modeling project funded by NSF EAR/GEO and NSF CBET/ENG. PIHM represents a strategy for the formulation and solution of fully-coupled process equations at the watershed and river basin scales, and includes a tightly coupled GIS tool for data handling, domain decomposition, optimal unstructured grid generation, and model parameterization. The sensor and simulation system has the following elements: 1) extensive, spatially-distributed, non-invasive, smart sensor networks to gather massive geologic, hydrologic, and geochemical data; 2) stochastic information fusion methods; 3) spatially-explicit multiphysics models/solutions of the land-vegetation-atmosphere system; and 4) asynchronous, parallel/distributed, adaptive algorithms for rapidly simulating the states of a basin at high resolution, 5) signal processing tools for data mining and parameter estimation, and 6) visualization tools. The prototype sensor array and simulation system proposed here will offer a coherent new approach to environmental predictions with a fully integrated observing system design. We expect that the Shale Hills Hydro_Sensorium may provide the needed synthesis of information and conceptualization necessary to advance predictive understanding in complex hydrologic systems.
Interactive visualization to advance earthquake simulation
Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.
2008-01-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.
Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla
2017-05-15
Recent technological advances in mass spectrometry, development of richer mass spectral libraries and data processing tools have enabled large scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limits the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified, but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and in aiding identification of unknown compounds. http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
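For context, ordinary partial correlations can be read off the inverse covariance (precision) matrix; DSPC additionally imposes sparsity and debiasing for the many-metabolites, few-samples regime, which this small dense sketch does not attempt. The metabolite names and data are synthetic.

```python
import numpy as np

def partial_correlations(x):
    """Partial correlation matrix from the precision matrix of data x (n x p)."""
    prec = np.linalg.pinv(np.cov(x, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(0)
n = 200
glucose = rng.normal(size=n)
g6p = glucose + 0.3 * rng.normal(size=n)        # driven by glucose
pyruvate = g6p + 0.3 * rng.normal(size=n)       # driven by g6p, not glucose directly
data = np.column_stack([glucose, g6p, pyruvate])
print(np.round(partial_correlations(data), 2))
# The glucose-pyruvate partial correlation is near zero once g6p is controlled for,
# which is the kind of indirect edge a partial correlation network removes.
```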
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
An interactive visualization tool for mobile objects
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuo
Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
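Two of the similarity measures named above, locational and directional similarity, can be sketched for trajectories resampled to the same number of points. These formulations are illustrative and are not claimed to match the dissertation's exact definitions.

```python
import numpy as np

def locational_similarity(a, b):
    """Mean point-to-point distance between two equally sampled paths (lower = closer)."""
    return float(np.linalg.norm(a - b, axis=1).mean())

def directional_similarity(a, b):
    """Mean cosine similarity of successive movement vectors (1 = same headings)."""
    da, db = np.diff(a, axis=0), np.diff(b, axis=0)
    num = (da * db).sum(axis=1)
    den = np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-12
    return float((num / den).mean())

t = np.linspace(0, 1, 50)
path1 = np.column_stack([t, np.sin(2 * np.pi * t)])
path2 = np.column_stack([t, np.sin(2 * np.pi * t) + 0.1])   # parallel, offset path
print(locational_similarity(path1, path2), directional_similarity(path1, path2))
```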
MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy
2016-11-01
We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
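The quantity being volume-rendered and analysed topologically here is the Laplacian of the charge density. A small finite-difference sketch on a gridded scalar field follows, with a Gaussian test density standing in for a real electron density.

```python
import numpy as np

def laplacian(field, spacing=1.0):
    """Second-order central-difference Laplacian of a 3D scalar field.
    np.roll wraps at the boundaries, which is fine for interior points here."""
    lap = np.zeros_like(field)
    for axis in range(3):
        lap += np.roll(field, -1, axis) - 2 * field + np.roll(field, 1, axis)
    return lap / spacing ** 2

# Gaussian "density": charge is locally concentrated where the Laplacian is negative.
x, y, z = np.meshgrid(*[np.linspace(-3, 3, 61)] * 3, indexing="ij")
rho = np.exp(-(x**2 + y**2 + z**2))
L = laplacian(rho, spacing=0.1)
print(L[30, 30, 30] < 0)   # True at the centre of the concentration
```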
FUn: a framework for interactive visualizations of large, high-dimensional datasets on the web.
Probst, Daniel; Reymond, Jean-Louis
2018-04-15
During the past decade, big data have become a major tool in scientific endeavors. Although statistical methods and algorithms are well-suited for analyzing and summarizing enormous amounts of data, the results do not allow for a visual inspection of the entire data. Current scientific software, including R packages and Python libraries such as ggplot2, matplotlib and plot.ly, does not support interactive visualizations of datasets exceeding 100 000 data points on the web. Other solutions enable the web-based visualization of big data only through data reduction or statistical representations. However, recent hardware developments, especially advancements in graphical processing units, allow for the rendering of millions of data points on a wide range of consumer hardware such as laptops, tablets and mobile phones. Similar to the challenges and opportunities brought to virtually every scientific field by big data, both the visualization of and interaction with copious amounts of data are demanding and hold great promise. Here we present FUn, a framework consisting of a client (Faerun) and server (Underdark) module, facilitating the creation of web-based, interactive 3D visualizations of large datasets, enabling record-level visual inspection. We also introduce a reference implementation providing access to SureChEMBL, a database containing patent information on more than 17 million chemical compounds. The source code and the most recent builds of Faerun and Underdark, Lore.js and the data preprocessing toolchain used in the reference implementation, are available on the project website (http://doc.gdb.tools/fun/). daniel.probst@dcb.unibe.ch or jean-louis.reymond@dcb.unibe.ch.
Advanced Prosthetic Gait Training Tool
2015-12-01
motion capture sequences was provided by MPL to CCAD and OGAL. CCAD’s work focused on imposing these sequences on the SantosTM digital human avatar ...manipulating the avatar image. These manipulations are accomplished in the context of reinforcing what is the more ideal position and relating...focus on the visual environment by asking users to manipulate a static image of the Santos avatar to represent their perception of what they observe
What Are the Usage Conditions of Web 2.0 Tools Faculty of Education Students?
ERIC Educational Resources Information Center
Agir, Ahmet
2014-01-01
As a result of advances in technology and the emergence of Internet use in every aspect of life, the web, which provides access to documents such as pictures, audio, animation and text, came into widespread use. At first, the web consisted only of visual and text pages that did not allow user interaction. However, it is seen that not…
ERIC Educational Resources Information Center
Kerins, John; Ramsay, Allan
2012-01-01
This paper reports on the development of a prototype tool which shows how learners can be helped to reflect upon the accuracy of their writing. Analysis of samples of freely written texts by intermediate and advanced learners of English as a foreign language (EFL) showed evidence of weakness in the use of tense and aspect. Computational discourse…
Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B
2018-01-01
The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. To discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatter™ Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatter™ Servo System (Synaptive Medical) is a surgical exoscope which avoids many of the ergonomic constraints of the binocular operating microscope, but is associated with a steep learning curve. The BrightMatter™ Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatter™ Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium. Copyright © 2017 by the Congress of Neurological Surgeons
NASA Astrophysics Data System (ADS)
Duley, A. R.; Sullivan, D.; Fladeland, M. M.; Myers, J.; Craig, M.; Enomoto, F.; Van Gilst, D. P.; Johan, S.
2011-12-01
The Common Operations and Management Portal for Airborne Science Systems (COMPASS) project is a multi-center collaborative effort to advance and extend the research capabilities of the National Aeronautics and Space Administration's (NASA) Airborne Science Program (ASP). At its most basic, COMPASS provides tools for visualizing the position of aircraft and instrument observations during the course of a mission, and facilitates dissemination, discussion, and analysis of multiple disparate data sources in order to more efficiently plan and execute airborne science missions. COMPASS targets a number of key objectives. First, deliver a common operating picture for improved shared situational awareness to all participants in NASA's Airborne Science missions. These participants include scientists, engineers, managers, and the general public. Second, encourage more responsive and collaborative measurements between instruments on multiple aircraft, satellites, and on the surface in order to increase the scientific value of these measurements. Fourth, provide flexible entry points for data providers to supply model and advanced analysis products to mission team members. Fifth, provide data consumers with a mechanism to ingest, search and display data products. Finally, embrace an open and transparent platform where common data products, services, and end user components can be shared with the broader scientific community. In pursuit of these objectives, and in concert with requirements solicited by the airborne science research community, the COMPASS project team has delivered a suite of core tools intended to represent the next generation toolset for airborne research. This toolset includes a collection of loosely coupled RESTful web-services, a system to curate, register, and search commonly used data sources, end-user tools which leverage web socket and other next generation HTML5 technologies to aid real time aircraft position and data visualization, and an extensible framework to rapidly accommodate mission specific requirements and mission tools.
NASA Astrophysics Data System (ADS)
Gopalan, A.; Doelling, D. R.; Scarino, B. R.; Chee, T.; Haney, C.; Bhatt, R.
2016-12-01
The CERES calibration group at NASA/LaRC has developed and deployed a suite of online data exploration and visualization tools targeted towards a range of spaceborne VIS/IR imager calibration applications for the Earth Science community. These web-based tools are driven by the open-source R (Language for Statistical Computing and Visualization) with a web interface for the user to customize the results according to their application. The tool contains a library of geostationary and sun-synchronous imager spectral response functions (SRF), incoming solar spectra, SCIAMACHY and Hyperion Earth reflected visible hyper-spectral data, and IASI IR hyper-spectral data. The suite of six specific web-based tools was designed to provide critical information necessary for sensor cross-calibration. One of the challenges of sensor cross-calibration is accounting for spectral band differences, which may introduce biases if not handled properly. The spectral band adjustment factors (SBAF) are a function of the earth target, atmospheric and cloud conditions or scene type, and angular conditions when obtaining sensor radiance pairs. The SBAF will need to be customized for each inter-calibration target and sensor pair. The advantages of having a community open source tool are: 1) only one archive of SCIAMACHY, Hyperion, and IASI datasets, on the order of 50 TB, needs to be maintained; 2) the framework allows easy incorporation of new satellite SRFs and hyper-spectral datasets and associated coincident atmospheric and cloud properties, such as PW; 3) web tool or SBAF algorithm improvements and suggestions, once incorporated, benefit the community at large; and 4) the customization effort is on the user rather than on the host. In this paper we discuss each of these tools in detail and explore the variety of advanced options that can be used to constrain the results, along with specific use cases that highlight the value added by these datasets.
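As a rough illustration of the spectral band adjustment idea described above (a sketch of the general technique, not the LaRC tool), the following assumes scene spectra and two sensor SRFs sampled on a common wavelength grid; an SBAF is then derived from band-convolved radiances, here as a zero-intercept regression slope. All inputs are synthetic.

```python
import numpy as np

def band_convolve(wavelength_nm, spectrum, srf):
    """Convolve a (hyper)spectral signal with a sensor spectral response function."""
    return np.trapz(spectrum * srf, wavelength_nm) / np.trapz(srf, wavelength_nm)

def sbaf(wavelength_nm, spectra, srf_reference, srf_target):
    """Spectral band adjustment factor from many scene spectra (e.g. hyper-spectral samples).

    Returned as the zero-intercept regression slope of target-band vs. reference-band
    values, one common way such adjustment factors are defined.
    """
    ref = np.array([band_convolve(wavelength_nm, s, srf_reference) for s in spectra])
    tgt = np.array([band_convolve(wavelength_nm, s, srf_target) for s in spectra])
    return np.sum(ref * tgt) / np.sum(ref * ref)

# Hypothetical inputs: 200 scene spectra on a 350-2500 nm grid, with Gaussian SRFs
# standing in for two imagers' red bands (placeholder shapes, not real SRFs).
wl = np.linspace(350, 2500, 1076)
spectra = np.random.rand(200, wl.size) * 0.6
srf_a = np.exp(-0.5 * ((wl - 640) / 30) ** 2)
srf_b = np.exp(-0.5 * ((wl - 660) / 25) ** 2)
print("SBAF (target/reference):", sbaf(wl, spectra, srf_a, srf_b))
```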
Deriving Tools from Real-Time Runs: A New CCMC Support for SEC and AFWA
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2007-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities at the Space Environment Center, or at the Air Force Weather Agency.
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
NASA Technical Reports Server (NTRS)
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamics response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong, including the overall approach and modeling capabilities, from force generation with contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
Intelligent platforms for disease assessment: novel approaches in functional echocardiography.
Sengupta, Partho P
2013-11-01
Accelerating trends in the dynamic digital era (from 2004 onward) have resulted in the emergence of novel parametric imaging tools that allow easy and accurate extraction of quantitative information from cardiac images. This review principally attempts to heighten the awareness of newer emerging paradigms that may advance acquisition, visualization and interpretation of the large functional data sets obtained during cardiac ultrasound imaging. Incorporation of innovative cognitive software that allows advanced pattern recognition and disease forecasting will likely transform the human-machine interface and interpretation process to achieve a more efficient and effective work environment. Novel technologies for automation and big data analytics that are already active in other fields need to be rapidly adapted to the health care environment with new academic-industry collaborations to enrich and accelerate the delivery of newer decision making tools for enhancing patient care. Copyright © 2013. Published by Elsevier Inc.
Visualising Earth's Mantle based on Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Bozdag, E.; Pugmire, D.; Lefebvre, M. P.; Hill, J.; Komatitsch, D.; Peter, D. B.; Podhorszki, N.; Tromp, J.
2017-12-01
Recent advances in 3D wave propagation solvers and high-performance computing have enabled regional and global full-waveform inversions. Interpretation of tomographic models is often done visually. Robust and efficient visualization tools are necessary to thoroughly investigate large model files, particularly at the global scale. In collaboration with Oak Ridge National Laboratory (ORNL), we have developed effective visualization tools and used them for visualization of our first-generation global model, GLAD-M15 (Bozdag et al. 2016). VisIt (https://wci.llnl.gov/simulation/computer-codes/visit/) is used for initial exploration of the models and for extraction of seismological features. The broad capability of VisIt and its demonstrated scalability proved valuable for experimenting with different visualization techniques and in the creation of timely results. Utilizing VisIt's plugin-architecture, a data reader plugin was developed, which reads the ADIOS (https://www.olcf.ornl.gov/center-projects/adios/) format of our model files. Blender (https://www.blender.org) is used for the setup of lighting, materials, camera paths and rendering of geometry. Python scripting was used to control the orchestration of different geometries, as well as camera animation for 3D movies. While we continue producing 3D contour plots and movies for various seismic parameters to better visualize plume- and slab-like features as well as anisotropy throughout the mantle, our aim is to make visualization an integral part of our global adjoint tomography workflow to routinely produce various 2D cross-sections to facilitate examination of our models after each iteration. This will ultimately form the basis for use of pattern recognition techniques in our investigations. Simulations for global adjoint tomography are performed on ORNL's Titan system and visualization is done in parallel on ORNL's post-processing cluster Rhea.
[Visual acuity in anti-VEGF therapy for AMD: Can specific characteristics in the SD-OCT help?]
Book, B; Ziegler, M; Heimes, B; Gutfleisch, M; Spital, G; Pauleikhoff, D; Lommatzsch, A
2017-01-01
The efficacy of anti-VEGF therapy in exudative AMD has been established in several large clinical trials using a fixed injection regimen as well as a SD-OCT-based PRN regimen. In these studies, after the first three injections, an increase of the mean visual acuity was observed, which could be stabilized with constant treatment for up to 24 months. However, the specific course of the visual acuity is very different between individuals. The aim of the present study was to correlate specific initial SD-OCT parameters with the course of visual acuity in order to characterize factors that may be important for the individual visual prognosis. In a prospective case study, the visual course and SD-OCT changes of 156 patients with minimum follow-up of 12 months (mean 80.1 months) were analysed. Visual acuity (LogMar) was investigated at regular intervals and correlated with specific SD-OCT parameters (foveal thickness, height of sub-retinal fluid or presence of associated PED, presence of intra-retinal cysts, length of IS/OS break, choroidal thickness). The initial increase in visual acuity could be stabilized over time. This effect was associated with a decrease in foveal retinal thickness, which also persisted over time. While sub-retinal fluid, presence of PED, and choroidal thickness showed no prognostic relevance for the change in visual acuity, the presence of more advanced central retinal thickness, of intra-retinal cysts or a longer break in the IS/OS junction were associated with a less favourable development of visual acuity. In the present study, the presence of more advanced central retinal thickness, of intra-retinal cysts or a larger IS/OS break correlated significantly with a worse visual prognosis. These might be clinical signs for more extensive pre-existing intra-retinal changes. Further analysis and new diagnostic tools may prove this and may result in specific additive neuroprotective or regenerative therapeutic approaches in exudative AMD.
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
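The group comparison reported above (repetitions to proficiency across novice, intermediate and advanced trainees) is exactly the kind of question a one-way ANOVA with pairwise t-tests answers; a minimal sketch follows, with made-up repetition counts that only loosely echo the reported group means and are not the study data.

```python
from scipy import stats

# Hypothetical repetition counts per trainee (NOT the study data; group means
# roughly echo the reported 41.8 / 21.2 / 11.7 averages for illustration only).
novice       = [45, 38, 44, 40, 42]
intermediate = [22, 19, 24, 20, 21]
advanced     = [12, 11, 13, 10, 12]

f_stat, p_value = stats.f_oneway(novice, intermediate, advanced)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up with Student's t-test, as in the abstract's analysis plan.
t_stat, p_pair = stats.ttest_ind(novice, advanced)
print(f"novice vs. advanced t-test: t = {t_stat:.2f}, p = {p_pair:.4f}")
```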
Teaching an Old Client New Tricks - the GloVIS Global Visualization Viewer after 14 Years
NASA Astrophysics Data System (ADS)
Meyer, D. J.; Steinwand, D.; Lemig, K.; Davis, B.; Werpy, J.; Quenzer, R.
2014-12-01
The US Geological Survey's Global Visualization Viewer (GloVIS) is a web-based, visual search and discovery tool used to access imagery from aircraft and space-based imaging systems. GloVIS was introduced shortly after the launch of Landsat 7 to provide a visual client to select images acquired by the Enhanced Thematic Mapper Plus. Since then, it has been expanded to search on other Landsat imagery (Multi-spectral Scanner, Thematic Mapper, Operational Land Imager), imagery from a variety of NASA instruments (Moderate Resolution Imaging Spectroradiometer, Advanced Spaceborne Thermal Emissions and Reflection Radiometer, Advanced Land Imager, Hyperion), along with images from high-resolution airborne photography and special collections representing decades-long observations. GloVIS incorporated a number of features considered novel at its original release, such as rapid visual browse, and the ability to use one type of satellite observation (e.g., vegetation seasonality curves derived from the Advanced Very High Resolution Radiometer) to assist in the selection of another (e.g., Landsat). After 14 years, the GloVIS client has gained a large following, having served millions of images to hundreds of thousands of users, but is due for a major re-design. Described here is a set of guiding principles driving the re-design, the methodology used to understand how users discover and retrieve imagery, and candidate technologies to be leveraged in the re-design. The guiding principles include (1) visual co-discovery - the ability to browse and select imagery from diverse sources simultaneously; (2) user-centric design - understanding user needs prior to design and involving users throughout the design process; (3) adaptability - the use of flexible design to permit rapid incorporation of new capabilities, and (4) interoperability - the use of services, conventions and protocols to permit interaction with external sources of Earth science imagery.
Advanced Engineering Technology for Measuring Performance.
Rutherford, Drew N; D'Angelo, Anne-Lise D; Law, Katherine E; Pugh, Carla M
2015-08-01
The demand for competency-based assessments in surgical training is growing. Use of advanced engineering technology for clinical skills assessment allows for objective measures of hands-on performance. Clinical performance can be assessed in several ways via quantification of an assessee's hand movements (motion tracking), direction of visual attention (eye tracking), levels of stress (physiologic marker measurements), and location and pressure of palpation (force measurements). Innovations in video recording technology and qualitative analysis tools allow for a combination of observer- and technology-based assessments. Overall the goal is to create better assessments of surgical performance with robust validity evidence. Copyright © 2015 Elsevier Inc. All rights reserved.
Power Plant Model Validation Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main tool features include: the tool interacts with GE PSLF; the tool uses the GE PSLF Play-In Function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.
Biographer: web-based editing and rendering of SBGN compliant biochemical networks
Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas
2013-01-01
Motivation: The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. Results: We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. Availability: The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL. Contact: edda.klipp@biologie.hu-berlin.de or handorf@physik.hu-berlin.de PMID:23574737
CRAVE: a database, middleware and visualization system for phenotype ontologies.
Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M
2005-04-01
A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However, tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible JAVA application that accesses an underlying MySQL database of ontologies via a JAVA persistent middleware layer (Chameleon). This maps the database tables into discrete JAVA classes and creates memory resident, interlinked objects corresponding to the ontology data. These JAVA objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.
Pazzaglia, Mariella; Galli, Giulia
2015-01-01
The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques show direct evidence that employment of the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach when it comes to advances in innovative methodologies.
Generating community-built tools for data sharing and analysis in environmental networks
Read, Jordan S.; Gries, Corinna; Read, Emily K.; Klug, Jennifer; Hanson, Paul C.; Hipsey, Matthew R.; Jennings, Eleanor; O'Reilley, Catherine; Winslow, Luke A.; Pierson, Don; McBride, Christopher G.; Hamilton, David
2016-01-01
Rapid data growth in many environmental sectors has necessitated tools to manage and analyze these data. The development of tools often lags behind the proliferation of data, however, which may slow exploratory opportunities and scientific progress. The Global Lake Ecological Observatory Network (GLEON) collaborative model supports an efficient and comprehensive data–analysis–insight life cycle, including implementations of data quality control checks, statistical calculations/derivations, models, and data visualizations. These tools are community-built and openly shared. We discuss the network structure that enables tool development and a culture of sharing, leading to optimized output from limited resources. Specifically, data sharing and a flat collaborative structure encourage the development of tools that enable scientific insights from these data. Here we provide a cross-section of scientific advances derived from global-scale analyses in GLEON. We document enhancements to science capabilities made possible by the development of analytical tools and highlight opportunities to expand this framework to benefit other environmental networks.
Cardiac imaging: working towards fully-automated machine analysis & interpretation.
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-03-01
Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
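To give a feel for what volume raycasting involves (independent of the iOS prototype described above, which runs on the GPU), here is a heavily simplified CPU sketch: axis-aligned rays are marched front to back through a synthetic volume and composited with an emission-absorption model. The volume, opacity scale and early-termination threshold are illustrative assumptions; real raycasters sample arbitrary view rays in a shader.

```python
import numpy as np

def raycast_front_to_back(volume, step_opacity=0.02):
    """Composite a 3D scalar volume along the z axis (one axis-aligned ray per pixel).

    volume: (nx, ny, nz) array of densities in [0, 1].
    Returns a 2D image using a simple emission-absorption model.
    """
    nx, ny, nz = volume.shape
    color = np.zeros((nx, ny))
    transmittance = np.ones((nx, ny))
    for z in range(nz):                           # march each ray one slice at a time
        sample = volume[:, :, z]
        alpha = np.clip(sample * step_opacity, 0.0, 1.0)
        color += transmittance * alpha * sample   # emitted "light", attenuated so far
        transmittance *= (1.0 - alpha)            # absorb as the ray moves deeper
        if transmittance.max() < 1e-3:            # early ray termination
            break
    return color

# Hypothetical volume: a fuzzy sphere standing in for an fMRI activation blob.
grid = np.linspace(-1, 1, 128)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
vol = np.exp(-8.0 * (x**2 + y**2 + z**2))
image = raycast_front_to_back(vol)
print(image.shape, image.min(), image.max())
```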
Cancer-disease associations: A visualization and animation through medical big data.
Iqbal, Usman; Hsu, Chun-Kung; Nguyen, Phung Anh Alex; Clinciu, Daniel Livius; Lu, Richard; Syed-Abdul, Shabbir; Yang, Hsuan-Chia; Wang, Yao-Chin; Huang, Chu-Ya; Huang, Chih-Wei; Chang, Yo-Cheng; Hsu, Min-Huei; Jian, Wen-Shan; Li, Yu-Chuan Jack
2016-04-01
Cancer is the primary disease responsible for death and disability worldwide. Currently, prevention and early detection represents the best hope for cure. Knowing in advance the diseases expected to occur with a particular cancer could enable physicians to better tailor their treatment for cancer. The aim of this study was to build an animated visualization tool, called the Cancer Associations Map Animation (CAMA), to chart the association of cancers with other diseases over time. The study population was collected from the Taiwan National Health Insurance Database during the period January 2000 to December 2002; 782 million outpatient visits were used to compute the associations of nine major cancers with other diseases. A motion chart was used to quantify and visualize the associations between diseases and cancers. The CAMA motion chart that was built successfully facilitated the observation of cancer-disease associations across ages and genders. The CAMA system can be accessed online at http://203.71.86.98/web/runq16.html. The CAMA animation system is an animated medical data visualization tool which provides a dynamic, time-lapse, animated view of cancer-disease associations across different age groups and gender. Derived from a large, nationwide healthcare dataset, this exploratory data analysis tool can detect cancer comorbidities earlier than is possible by manual inspection. Taking into account the trajectory of cancer-specific comorbidity development may help clinicians and healthcare researchers more efficiently explore early stage hypotheses, develop new cancer treatment approaches, and identify potential effect modifiers or new risk factors associated with specific cancers. Copyright © 2016. Published by Elsevier Ireland Ltd.
Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie
2017-01-01
Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front end forms. The Chado Data Display module contains tools to visualize each data type and the metadata which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are the tools to visualize map and species summary. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal Extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/
U.S. Geological Survey: A synopsis of Three-dimensional Modeling
Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.
2011-01-01
The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing the ability to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase our need to generate, verify, analyze, interpret, confirm, store, and distribute scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.
Tool for Sizing Analysis of the Advanced Life Support System
NASA Technical Reports Server (NTRS)
Yeh, Hue-Hsie Jannivine; Brown, Cheryl B.; Jeng, Frank J.
2005-01-01
Advanced Life Support Sizing Analysis Tool (ALSSAT) is a computer model for sizing and analyzing designs of environmental-control and life support systems (ECLSS) for spacecraft and surface habitats involved in the exploration of Mars and the Moon. It performs conceptual designs of advanced life support (ALS) subsystems that utilize physicochemical and biological processes to recycle air and water, and process wastes in order to reduce the need for resource resupply. By assuming steady-state operations, ALSSAT is a means of investigating combinations of such subsystem technologies and thereby assisting in determining the most cost-effective technology combination available. In fact, ALSSAT can perform sizing analysis of ALS subsystems that operate dynamically or at steady state. Using the Microsoft Excel spreadsheet software with the Visual Basic programming language, ALSSAT has been developed to perform multiple-case trade studies based on the calculated ECLSS mass, volume, power, and Equivalent System Mass, as well as parametric studies by varying the input parameters. ALSSAT's modular format is specifically designed for the ease of future maintenance and upgrades.
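Equivalent System Mass, mentioned above, folds a subsystem's volume, power, cooling and crew-time demands into a single mass-like figure of merit using location-specific equivalency factors. The sketch below shows that bookkeeping in its simplest form; the factor values and subsystem numbers are placeholders for illustration, not ALSSAT data or official NASA factors.

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_per_yr, duration_yr, factors):
    """Roll subsystem demands into a single mass-like metric (a sketch of the ESM idea).

    factors: equivalency factors, e.g. kg per m^3 of pressurized volume, kg per kW of
    power, kg per kW of heat rejection, kg per crew-hour. All values used below are
    placeholders for illustration only.
    """
    return (mass_kg
            + volume_m3 * factors["kg_per_m3"]
            + power_kw * factors["kg_per_kw_power"]
            + cooling_kw * factors["kg_per_kw_cooling"]
            + crewtime_hr_per_yr * duration_yr * factors["kg_per_crew_hr"])

# Hypothetical water-recovery subsystem on a hypothetical surface mission.
placeholder_factors = {"kg_per_m3": 66.7, "kg_per_kw_power": 237.0,
                       "kg_per_kw_cooling": 60.0, "kg_per_crew_hr": 0.466}
esm = equivalent_system_mass(mass_kg=500, volume_m3=2.0, power_kw=1.5,
                             cooling_kw=1.5, crewtime_hr_per_yr=50,
                             duration_yr=1.5, factors=placeholder_factors)
print(f"ESM ≈ {esm:.0f} kg-equivalent")
```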
The Role of Motor Learning in Spatial Adaptation near a Tool
Brown, Liana E.; Doole, Robert; Malfait, Nicole
2011-01-01
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool, the passive training group received visual experience with the tool, but no motor experience, and finally, a no-training control group received neither visual nor motor experience using the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far, from the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented. PMID:22174944
Integrated network analysis and effective tools in plant systems biology
Fukushima, Atsushi; Kanaya, Shigehiko; Nishida, Kozo
2014-01-01
One of the ultimate goals in plant systems biology is to elucidate the genotype-phenotype relationship in plant cellular systems. Integrated network analysis that combines omics data with mathematical models has received particular attention. Here we focus on the latest cutting-edge computational advances that facilitate their combination. We highlight (1) network visualization tools, (2) pathway analyses, (3) genome-scale metabolic reconstruction, and (4) the integration of high-throughput experimental data and mathematical models. Multi-omics data that contain the genome, transcriptome, proteome, and metabolome and mathematical models are expected to integrate and expand our knowledge of complex plant metabolisms. PMID:25408696
Leaving the structural ivory tower, assisted by interactive 3D PDF.
Kumar, Pravin; Ziegler, Alexander; Grahn, Alexander; Hee, Chee Seng; Ziegler, Andreas
2010-08-01
The ability to embed interactive three-dimensional (3D) models into electronic publications in portable document format (PDF) greatly enhances the accessibility of molecular structures. Here, we report advances in this procedure and discuss what is needed to develop this format into a truly useful tool for the structural biology community as well as for readers who are less well trained in molecular visualization. Copyright 2010 Elsevier Ltd. All rights reserved.
MASH Suite Pro: A Comprehensive Software Tool for Top-Down Proteomics*
Cai, Wenxuan; Guner, Huseyin; Gregorich, Zachery R.; Chen, Albert J.; Ayaz-Guner, Serife; Peng, Ying; Valeja, Santosh G.; Liu, Xiaowen; Ge, Ying
2016-01-01
Top-down mass spectrometry (MS)-based proteomics is arguably a disruptive technology for the comprehensive analysis of all proteoforms arising from genetic variation, alternative splicing, and posttranslational modifications (PTMs). However, the complexity of top-down high-resolution mass spectra presents a significant challenge for data analysis. In contrast to the well-developed software packages available for data analysis in bottom-up proteomics, the data analysis tools in top-down proteomics remain underdeveloped. Moreover, despite recent efforts to develop algorithms and tools for the deconvolution of top-down high-resolution mass spectra and the identification of proteins from complex mixtures, a multifunctional software platform, which allows for the identification, quantitation, and characterization of proteoforms with visual validation, is still lacking. Herein, we have developed MASH Suite Pro, a comprehensive software tool for top-down proteomics with multifaceted functionality. MASH Suite Pro is capable of processing high-resolution MS and tandem MS (MS/MS) data using two deconvolution algorithms to optimize protein identification results. In addition, MASH Suite Pro allows for the characterization of PTMs and sequence variations, as well as the relative quantitation of multiple proteoforms in different experimental conditions. The program also provides visualization components for validation and correction of the computational outputs. Furthermore, MASH Suite Pro facilitates data reporting and presentation via direct output of the graphics. Thus, MASH Suite Pro significantly simplifies and speeds up the interpretation of high-resolution top-down proteomics data by integrating tools for protein identification, quantitation, characterization, and visual validation into a customizable and user-friendly interface. We envision that MASH Suite Pro will play an integral role in advancing the burgeoning field of top-down proteomics. PMID:26598644
Nikolian, Vahagn C; Ibrahim, Andrew M
2017-09-01
Journals fill several important roles within academic medicine, including building knowledge, validating quality of methods, and communicating research. This section provides an overview of these roles and highlights innovative approaches journals have taken to enhance dissemination of research. As journals move away from print formats and embrace web-based content, design-centered thinking will allow for engagement of a larger audience. Examples of recent efforts in this realm are provided, as well as simplified strategies for developing visual abstracts to improve dissemination via social media. Finally, we home in on principles of learning and education which have driven these advances in multimedia-based communication in scientific research.
Healey, Robert D; Wojciechowski, Jonathan P; Monserrat-Martinez, Ana; Tan, Susan L; Marquis, Christopher P; Sierecki, Emma; Gambin, Yann; Finch, Angela M; Thordarson, Pall
2018-02-21
A G protein-coupled receptor (GPCR) agonist protein, thaumatin, was site-specifically conjugated at the N- or C-terminus with a fluorophore for visualization of GPCR:agonist interactions. The N-terminus was specifically conjugated using a synthetic 2-pyridinecarboxyaldehyde reagent. The interaction profiles observed for N- and C-terminal conjugates were varied; N-terminal conjugates interacted very weakly with the GPCR of interest, whereas C-terminal conjugates bound to the receptor. These chemical biology tools allow therapeutic protein:GPCR interactions to be monitored and visualized. The methodology used for site-specific bioconjugation represents an advance in the application of 2-pyridinecarboxyaldehydes for N-terminal specific bioconjugations.
Web-Based Tools for Data Visualization and Decision Support for South Asia
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Pulla, S. T.; Ames, D. P.; Souffront, M.; David, C. H.; Zaitchik, B. F.; Gatlin, P. N.; Matin, M. A.
2017-12-01
The objective of the NASA SERVIR project is to assist developing countries in using information provided by Earth observing satellites to assess and manage climate risks, land use, and water resources. We present a collection of web apps that integrate earth observations and in situ data to facilitate deployment of data and water resources models as decision-making tools in support of this effort. The interactive nature of web apps makes this an excellent medium for creating decision support tools that harness cutting edge modeling techniques. Thin client apps hosted in a cloud portal eliminate the need for the decision makers to procure and maintain the high performance hardware required by the models, deal with issues related to software installation and platform incompatibilities, or monitor and install software updates, a problem that is exacerbated for many of the regional SERVIR hubs where both financial and technical capacity may be limited. All that is needed to use the system is an Internet connection and a web browser. We take advantage of these technologies to develop tools which can be centrally maintained but openly accessible. Advanced mapping and visualization make results intuitive and the derived information actionable. We also take advantage of the emerging standards for sharing water information across the web using the OGC and WMO approved WaterML standards. This makes our tools interoperable and extensible via application programming interfaces (APIs) so that tools and data from other projects can both consume and share the tools developed in our project. Our approach enables the integration of multiple types of data and models, thus facilitating collaboration between science teams in SERVIR. The apps developed thus far by our team process time-varying netCDF files from Earth observations and large-scale computer simulations and allow visualization and exploration via raster animation and extraction of time series at selected points and/or regions.
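One of the routine operations described above — pulling a time series at a point of interest out of a time-varying netCDF product — can be sketched with xarray. The file name, variable name and coordinate names below are hypothetical; this is an illustration of the pattern, not the SERVIR apps' code.

```python
import xarray as xr

# Hypothetical Earth-observation product; variable and coordinate names are assumptions.
ds = xr.open_dataset("precip_daily.nc")

# Time series at the grid cell nearest a point of interest (e.g. a gauge near Kathmandu),
# plus a regional average for context.
point = ds["precip"].sel(lat=27.7, lon=85.3, method="nearest")
region = ds["precip"].sel(lat=slice(26, 30), lon=slice(80, 89)).mean(dim=["lat", "lon"])

point.to_dataframe().to_csv("precip_point.csv")   # hand off to a web chart or WaterML encoder
print(region.isel(time=slice(0, 5)).values)
```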
Engaging Patients With Advance Directives Using an Information Visualization Approach.
Woollen, Janet; Bakken, Suzanne
2016-01-01
Despite the benefits of advance directives (AD) to patients and care providers, they are often not completed due to lack of patient awareness. The purpose of the current article is to advocate for creation and use of an innovative information visualization (infovisual) as a health communication tool aimed at improving AD dissemination and engagement. The infovisual would promote AD awareness by encouraging patients to learn about their options and inspire contemplation and conversation regarding their end-of-life (EOL) journey. An infovisual may be able to communicate insights that are often communicated in words, but are much more powerfully communicated by example. Furthermore, an infovisual could facilitate vivid understanding of options and inspire the beginning of often difficult conversations among care providers, patients, and loved ones. It may also save clinicians time, as care providers may be able to spend less time explaining details of EOL care options. Use of an infovisual could assist in ensuring a well-planned EOL journey. Copyright 2016, SLACK Incorporated.
Traenkle, Bjoern; Rothbauer, Ulrich
2017-01-01
Single-domain antibodies (sdAbs) have substantially expanded the possibilities of advanced cellular imaging such as live-cell or super-resolution microscopy to visualize cellular antigens and their dynamics. In addition to their unique properties including small size, high stability, and solubility in many environments, sdAbs can be efficiently functionalized according to the needs of the respective imaging approach. Genetically encoded intrabodies fused to fluorescent proteins (chromobodies) have become versatile tools to study dynamics of endogenous proteins in living cells. Additionally, sdAbs conjugated to organic dyes were shown to label cellular structures with high density and minimal fluorophore displacement making them highly attractive probes for super-resolution microscopy. Here, we review recent advances of the chromobody technology to visualize localization and dynamics of cellular targets and the application of chromobody-based cell models for compound screening. Acknowledging the emerging importance of super-resolution microscopy in cell biology, we further discuss advantages and challenges of sdAbs for this technology.
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris
2012-10-14
With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration – where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the “interaction junk”). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
Availability of the sequences of entire genomes shifts scientific curiosity towards the identification of the function of genomes on a large scale, as in genome studies. In the near future, data produced about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing this data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors to provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. This tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, interpretation of large-scale microarray-generated data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
Wired Widgets: Agile Visualization for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Gerschefske, K.; Witmer, J.
2012-09-01
Continued advancement in sensors and analysis techniques has resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. The variety of applications demonstrates that Wired Widgets provides a flexible, data driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide an agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can generally be applied to the SSA problem space to provide operators flexibility, potentially informing future acquisition and system development.
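The inter-widget publish-subscribe pattern described above can be reduced to a very small event bus. The Python sketch below only shows the shape of the pattern (the actual Wired Widgets framework is web/JavaScript-based, and the channel and payload names here are made up).

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish-subscribe hub of the kind widget frameworks use to share data."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable) -> None:
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, payload) -> None:
        for handler in self._subscribers[channel]:
            handler(payload)

# Hypothetical usage: a track-list widget publishes a selection, a map widget reacts.
bus = EventBus()
bus.subscribe("track.selected", lambda track: print(f"map widget centers on {track['id']}"))
bus.publish("track.selected", {"id": "SAT-12345", "lat": 12.3, "lon": 45.6})
```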
A survey of tools for variant analysis of next-generation genome sequencing data
Pabinger, Stephan; Dander, Andreas; Fischer, Maria; Snajder, Rene; Sperk, Michael; Efremova, Mirjana; Krabichler, Birgit; Speicher, Michael R.; Zschocke, Johannes
2014-01-01
Recent advances in genome sequencing technologies provide unprecedented opportunities to characterize individual genomic landscapes and identify mutations relevant for diagnosis and therapy. Specifically, whole-exome sequencing using next-generation sequencing (NGS) technologies is gaining popularity in the human genetics community due to the moderate costs, manageable data amounts and straightforward interpretation of analysis results. While whole-exome and, in the near future, whole-genome sequencing are becoming commodities, data analysis still poses significant challenges and led to the development of a plethora of tools supporting specific parts of the analysis workflow or providing a complete solution. Here, we surveyed 205 tools for whole-genome/whole-exome sequencing data analysis supporting five distinct analytical steps: quality assessment, alignment, variant identification, variant annotation and visualization. We report an overview of the functionality, features and specific requirements of the individual tools. We then selected 32 programs for variant identification, variant annotation and visualization, which were subjected to hands-on evaluation using four data sets: one set of exome data from two patients with a rare disease for testing identification of germline mutations, two cancer data sets for testing variant callers for somatic mutations, copy number variations and structural variations, and one semi-synthetic data set for testing identification of copy number variations. Our comprehensive survey and evaluation of NGS tools provides a valuable guideline for human geneticists working on Mendelian disorders, complex diseases and cancers. PMID:23341494
Interactive Visualization to Advance Earthquake Simulation
NASA Astrophysics Data System (ADS)
Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn
2008-04-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations.
Intuitive Visualization of Transient Flow: Towards a Full 3D Tool
NASA Astrophysics Data System (ADS)
Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph
2015-04-01
Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed STRING, a commercial software package for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization experts can more easily transport their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow an advanced method for intelligent, time-dependent seeding of the pathlets is implemented based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool. Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts enabling also direct exploration and analysis of large 3D flow fields the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
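The Lagrangian pathlet idea described above — seed particles, advect them along the velocity field, retire them to limit visual clutter — can be sketched with a simple explicit Euler integration in 2D. The vortex velocity field and the seeding/removal rule below are illustrative assumptions, not the STRING or FPM implementation.

```python
import numpy as np

def velocity(p):
    """Illustrative steady 2D vortex field; a real tool would interpolate simulation output."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack((-y, x))

rng = np.random.default_rng(0)
particles = rng.uniform(-1, 1, size=(200, 2))   # initial pathlet seeds
dt, max_age = 0.05, 60
age = np.zeros(len(particles), dtype=int)

for _ in range(300):
    particles += dt * velocity(particles)       # explicit Euler step along the pathline
    age += 1
    # Retire old or out-of-domain pathlets and reseed them to avoid visual cluttering.
    dead = (age > max_age) | (np.abs(particles) > 1.5).any(axis=1)
    particles[dead] = rng.uniform(-1, 1, size=(dead.sum(), 2))
    age[dead] = 0

print("pathlets alive:", len(particles))
```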
Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.
2012-12-01
As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views, helping them gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge-discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component of comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization and demonstrates the information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. The river view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods and will alert communities in advance to help minimize flood damage.
Visualizing protein partnerships in living cells and organisms.
Lowder, Melissa A; Appelbaum, Jacob S; Hobert, Elissa M; Schepartz, Alanna
2011-12-01
In recent years, scientists have expanded their focus from cataloging genes to characterizing the multiple states of their translated products. One anticipated result is a dynamic map of the protein association networks and activities that occur within the cellular environment. While in vitro-derived network maps can illustrate which of a multitude of possible protein-protein associations could exist, they supply a falsely static picture lacking the subtleties of subcellular location (where) or cellular state (when). Generating protein association network maps that are informed by both subcellular location and cell state requires novel approaches that accurately characterize the state of protein associations in living cells and provide precise spatiotemporal resolution. In this review, we highlight recent advances in visualizing protein associations and networks under increasingly native conditions. These advances include second generation protein complementation assays (PCAs), chemical and photo-crosslinking techniques, and proximity-induced ligation approaches. The advances described focus on background reduction, signal optimization, rapid and reversible reporter assembly, decreased cytotoxicity, and minimal functional perturbation. Key breakthroughs have addressed many challenges and should expand the repertoire of tools useful for generating maps of protein interactions resolved in both time and space. Copyright © 2011 Elsevier Ltd. All rights reserved.
Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Emma; Kiliccote, Sila; McParland, Charles
2014-07-01
This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. These data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid’s increasingly complex loads that include features such as large volumes of distributed generation. Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power-flow characteristics and active generation with its own protection and control capabilities. Using µPMU data on the change in voltage phase angle between two points in conjunction with new and existing distribution-grid planning and operational tools is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of accuracy in measurement required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; and standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors’ sensors and advanced measurement devices. In addition, data from advanced sources such as µPMUs could be used to validate models to improve and ensure accuracy, providing information on normally estimated values such as underground conductor impedance and characterization of complex loads. Although the input of high-fidelity data to existing tools will be challenging, µPMU data on phase angle (as well as other data from advanced sensors) will be useful for basic operational decisions that are based on a trend of changing data.
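As a hedged illustration of the kind of phase-angle analysis discussed above, the sketch below computes the wrapped voltage phase-angle difference between two hypothetical measurement nodes and a textbook lossless-line power-flow proxy. The node values and line reactance are assumptions for demonstration only, not the report's data or method.

```python
import numpy as np

# Hypothetical time-aligned voltage angles (degrees) at two distribution feeder nodes.
angle_a_deg = np.array([12.31, 12.34, 12.40, 12.37])
angle_b_deg = np.array([12.05, 12.02, 11.98, 12.01])

# Phase-angle difference, wrapped to (-180, 180] degrees.
delta_deg = (angle_a_deg - angle_b_deg + 180.0) % 360.0 - 180.0

# Classic lossless-line approximation (an assumption, not the report's model):
# P ~= (|Va| * |Vb| / X) * sin(delta), with per-unit voltages and a hypothetical reactance.
v_a, v_b, x_line = 1.0, 0.98, 0.05
p_flow = (v_a * v_b / x_line) * np.sin(np.radians(delta_deg))

print(delta_deg)   # angle difference trend over time
print(p_flow)      # rough proxy for active power flow between the nodes
```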
A web server for mining Comparative Genomic Hybridization (CGH) data
NASA Astrophysics Data System (ADS)
Liu, Jun; Ranka, Sanjay; Kahveci, Tamer
2007-11-01
Advances in cytogenetics and molecular biology have established that chromosomal alterations are critical in the pathogenesis of human cancer. Recurrent chromosomal alterations provide cytological and molecular markers for the diagnosis and prognosis of disease. They also facilitate the identification of genes that are important in carcinogenesis, which in the future may help in the development of targeted therapy. A large and growing amount of cancer genetic data is now publicly available. There is a need for public-domain tools that allow users to analyze their data and visualize the results. This chapter describes a web-based software tool that allows researchers to analyze and visualize Comparative Genomic Hybridization (CGH) datasets. It employs novel data mining methodologies for clustering and classification of CGH datasets, as well as algorithms for identifying important markers (small sets of genomic intervals with aberrations) that are potentially cancer signatures. The developed software will help in understanding the relationships between genomic aberrations and cancer types.
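The tool's specific mining algorithms are not detailed in this abstract, so the sketch below uses generic hierarchical clustering of gain/loss interval vectors (via SciPy) as a stand-in to show how CGH profiles might be grouped; the profiles themselves are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical CGH profiles: rows = samples, columns = genomic intervals,
# values in {-1, 0, +1} for loss / no change / gain.
profiles = np.array([
    [ 1, 1, 0, 0, -1,  0],
    [ 1, 1, 0, 0, -1, -1],
    [ 0, 0, 1, 1,  0,  0],
    [ 0, 0, 1, 1,  0,  1],
])

# Agglomerative clustering on Hamming distance between aberration patterns.
dist = pdist(profiles, metric="hamming")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g. two clusters corresponding to distinct aberration signatures
```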
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.
AULA-Advanced Virtual Reality Tool for the Assessment of Attention: Normative Study in Spain.
Iriarte, Yahaira; Diaz-Orueta, Unai; Cueto, Eduardo; Irazustabarrena, Paula; Banterla, Flavio; Climent, Gema
2016-06-01
The present study describes the collection of normative data for the AULA test, a virtual reality tool designed to evaluate attention problems, especially in children and adolescents. The normative sample comprised 1,272 participants (48.2% female) with an age range from 6 to 16 years (M = 10.25, SD = 2.83). The AULA test administered to them presents both visual and auditory stimuli, while randomized distractors of an ecological nature appear progressively. Variables provided by AULA were clustered into different categories for subsequent analysis. Differences by age and gender were analyzed, resulting in 14 groups, 7 per sex. Differences between visual and auditory attention were also obtained. The normative data obtained support the use of AULA for evaluating attention in Spanish children and adolescents in a more ecological way. Further studies will be needed to determine the sensitivity and specificity of AULA for measuring attention in different clinical populations. (J. of Att. Dis. 2016; 20(6) 542-568). © The Author(s) 2012.
Label-free imaging of gold nanoparticles in single live cells by photoacoustic microscopy
NASA Astrophysics Data System (ADS)
Tian, Chao; Qian, Wei; Shao, Xia; Xie, Zhixing; Cheng, Xu; Liu, Shengchun; Cheng, Qian; Liu, Bing; Wang, Xueding
2016-03-01
Gold nanoparticles (AuNPs) have been extensively explored as a model nanostructure in nanomedicine and have been widely used to provide advanced biomedical research tools in diagnostic imaging and therapy. Because AuNPs must be targeted to individual cells, evaluation and visualization of AuNPs at the cellular level is critical to fully understand their interaction with the cellular environment. Current imaging technologies, such as fluorescence microscopy and transmission electron microscopy, all have advantages and disadvantages. In this paper, we synthesized AuNPs by femtosecond pulsed laser ablation, modified their surface chemistry through sequential bioconjugation, and targeted the functionalized AuNPs to individual cancer cells. Based on their high optical absorption contrast, we developed a novel, label-free imaging method to evaluate and visualize intracellular AuNPs using photoacoustic microscopy (PAM). A preliminary study shows that the PAM imaging technique is capable of imaging cellular uptake of AuNPs in vivo at single-cell resolution, which provides an important tool for the study of AuNPs in nanomedicine.
Mapping and Modeling Web Portal to Advance Global Monitoring and Climate Research
NASA Astrophysics Data System (ADS)
Chang, G.; Malhotra, S.; Bui, B.; Sadaqathulla, S.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Rodriguez, L.; Law, E.
2011-12-01
Today, the principal investigators of NASA Earth Science missions develop their own software to manipulate, visualize, and analyze the data collected from Earth, space, and airborne observation instruments. There is very little, if any, collaboration among these principal investigators due to the lack of collaborative tools that would allow these scientists to share data and results. At NASA's Jet Propulsion Laboratory (JPL), under the Lunar Mapping and Modeling Project (LMMP), we have built a web portal that exposes a set of common services allowing users to search, visualize, subset, and download lunar science data. Users also have access to a set of tools that visualize, analyze and annotate the data. These services are developed according to industry standards for data access and manipulation, such as REST and Open Geospatial Consortium (OGC) web services. As a result, users can access the datasets through custom-written applications or off-the-shelf applications such as Google Earth. Even though it is currently used to store and process lunar data, this web portal infrastructure has been designed to support other solar system bodies such as asteroids and planets, including Earth. The infrastructure uses a combination of custom, commercial, and open-source software as well as off-the-shelf hardware and pay-by-use cloud computing services. The use of standardized web service interfaces facilitates platform- and application-independent access to the services and data. For instance, we have software clients for the LMMP portal that provide a rich browsing and analysis experience from a variety of platforms, including iOS and Android mobile platforms and large-screen multi-touch displays with 3-D terrain viewing functions. The service-oriented architecture and design principles utilized in the implementation of the portal make it reusable and scalable, and it could naturally be extended to include a collaborative environment that enables scientists and principal investigators to share their research and analysis seamlessly. In addition, this extension will allow users to easily share their tools and data, and to enrich their mapping and analysis experiences. In this talk, we will describe the advanced data management and portal technologies used to power this collaborative environment. We will further illustrate how this environment can enable, enhance and advance global monitoring and climate research.
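Because the portal exposes standards such as OGC web services, any client can request map imagery with an ordinary WMS GetMap call. The sketch below shows that general pattern; the endpoint URL, layer name, and bounding box are placeholders, not the portal's actual values.

```python
import requests

# Hypothetical WMS endpoint and layer; the portal's real URLs and layers are not given here.
WMS_URL = "https://example.nasa.gov/lmmp/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "lunar_dem_hillshade",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "-45,0,0,45",              # illustrative lat/lon bounding box
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
with open("tile.png", "wb") as fh:
    fh.write(response.content)   # save the rendered map tile locally
```

The same request could be issued from a mobile client or a desktop GIS, which is the point of the standardized interface described above.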
Reliability of visual and instrumental color matching.
Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D
2017-09-01
The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in everyday dental practice to enhance the esthetic outcome. © 2017 Wiley Periodicals, Inc.
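For readers unfamiliar with Fleiss' kappa, the following sketch shows how an inter-rater agreement value of this kind can be computed in Python with statsmodels; the shade selections in the example are invented and unrelated to the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical shade-matching data: rows = crowns (subjects), columns = observers (raters),
# entries = index of the selected shade tab (e.g., 0 = A1, 1 = A2, ...).
selections = np.array([
    [0, 0, 1, 0],
    [2, 2, 2, 3],
    [1, 1, 1, 1],
    [3, 2, 3, 3],
    [0, 1, 0, 0],
])

table, _ = aggregate_raters(selections)      # subjects x categories count table
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")        # 1.0 = perfect agreement, 0 = chance level
```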
Integrating natural language processing and web GIS for interactive knowledge domain visualization
NASA Astrophysics Data System (ADS)
Du, Fangming
Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.
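As a rough sketch of the workflow described (topic modeling followed by a self-organizing-map layout), the code below combines gensim's LDA with the MiniSom package on a toy corpus. The corpus, parameters, and library choices are assumptions for illustration; they are not the thesis's implementation.

```python
import numpy as np
from gensim import corpora, models
from minisom import MiniSom

# Toy corpus standing in for conference abstracts (tokenized and stop-word filtered upstream).
texts = [
    ["glacier", "ice", "mass", "balance"],
    ["ice", "sheet", "melt", "sea", "level"],
    ["urban", "heat", "island", "temperature"],
    ["city", "temperature", "heat", "mortality"],
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Dense document-topic vectors for every abstract.
doc_topics = np.array([
    [prob for _, prob in lda.get_document_topics(bow, minimum_probability=0.0)]
    for bow in corpus
])

# Project topic vectors onto a small self-organizing map; each cell becomes a map location
# that a web-mapping front end could then display and let users pan/zoom over.
som = MiniSom(4, 4, doc_topics.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(doc_topics, num_iteration=500)
positions = [som.winner(vec) for vec in doc_topics]
print(positions)
```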
Genoviz Software Development Kit: Java tool kit for building genomics visualization applications.
Helt, Gregg A; Nicol, John W; Erwin, Ed; Blossom, Eric; Blanchard, Steven G; Chervitz, Stephen A; Harmon, Cyrus; Loraine, Ann E
2009-08-25
Visualization software can expose previously undiscovered patterns in genomic data and advance biological science. The Genoviz Software Development Kit (SDK) is an open source, Java-based framework designed for rapid assembly of visualization software applications for genomics. The Genoviz SDK framework provides a mechanism for incorporating adaptive, dynamic zooming into applications, a desirable feature of genome viewers. Visualization capabilities of the Genoviz SDK include automated layout of features along genetic or genomic axes; support for user interactions with graphical elements (Glyphs) in a map; a variety of Glyph sub-classes that promote experimentation with new ways of representing data in graphical formats; and support for adaptive, semantic zooming, whereby objects change their appearance depending on zoom level and zooming rate adapts to the current scale. Freely available demonstration and production quality applications, including the Integrated Genome Browser, illustrate Genoviz SDK capabilities. Separation between graphics components and genomic data models makes it easy for developers to add visualization capability to pre-existing applications or build new applications using third-party data models. Source code, documentation, sample applications, and tutorials are available at http://genoviz.sourceforge.net/.
Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets
Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc
2016-01-01
Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916
Informing Regional Water-Energy-Food Nexus with System Analysis and Interactive Visualizations
NASA Astrophysics Data System (ADS)
Yang, Y. C. E.; Wi, S.
2016-12-01
Communicating scientific results to non-technical practitioners is challenging due to their differing interests, concerns and agendas. It is further complicated by the growing number of relevant factors that need to be considered, such as climate change and demographic dynamics. Visualization is an effective method for the scientific community to disseminate results, and it represents an opportunity for the future of water resources systems analysis (WRSA). This study demonstrates an intuitive way to communicate WRSA results to practitioners using interactive web-based visualization tools built with the JavaScript library Data-Driven Documents (D3), with a case study in the Great Ruaha River of Tanzania. The decreasing trend of streamflow in the region during the last decades highlights the need to assess the competition for water between agricultural production, energy generation, and ecosystem services. Our team conducted an advanced water resources systems analysis to inform policies that will affect the water-energy-food nexus. Modeling results are presented in the web-based visualization tools, which allow non-technical practitioners to brush the graphs directly (e.g., Figure 1). The WRSA suggests that no single measure can completely resolve the water competition. A combination of measures, each acceptable from a social and economic perspective, together with accepting that zero flows in the wetland cannot be totally eliminated during dry years, is likely the best way forward.
Jørgensen, Katarina M; Haddow, Pauline C
2011-08-01
Simulation tools are playing an increasingly important role behind advances in the field of systems biology. However, the current generation of biological science students has little or no experience with such tools. This educational gap limits both the potential use of such tools and the potential for tighter cooperation between designers and users. Although some simulation tool producers encourage their use in teaching, little attempt has hitherto been made to analyze and discuss their suitability as educational tools for noncomputing science students. In general, today's simulation tools assume that the user has a stronger mathematical and computing background than is found in most biological science curricula, making the introduction of such tools a considerable pedagogical challenge. This paper provides an evaluation of the pedagogical attributes of existing simulation tools for cell signal transduction based on Cognitive Load theory. Further, design recommendations for an improved educational simulation tool are provided. The study is based on simulation tools for cell signal transduction, but the discussions are relevant to a broader set of biological simulation tools.
NASA Astrophysics Data System (ADS)
Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.
2017-12-01
Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package primarily built for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions, such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill, and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, the previous implementation relied on a slow-performing vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline-processing architecture results in a highly performant VCS library, and its multitude of supported data formats and visualization algorithms eases the adoption of new visualization methods and data formats in VCS. In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as upgrades that we are planning in the near future to improve its ease of use and reliability and extend its capabilities.
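A minimal VCS usage sketch, assuming a CDAT installation and a CF-compliant file named clt.nc containing a clt cloud-fraction variable (both assumptions), might look like the following.

```python
import cdms2
import vcs

# Open a hypothetical CF-compliant NetCDF file and read a variable with its axes.
f = cdms2.open("clt.nc")
clt = f("clt")

canvas = vcs.init()                   # create a VCS canvas
boxfill = canvas.createboxfill()      # 2D scalar rendering method
canvas.plot(clt, boxfill)             # draw the field on the canvas
canvas.png("clt_boxfill.png")         # save the rendering to disk
```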
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowlin, S.; Cochran, J.; Cox, S.
Low emission development strategies (LEDS) articulate policies and implementation plans that enable countries to advance sustainable, climate-resilient development and private sector growth while significantly reducing the greenhouse gas (GHG) emissions traditionally associated with economic growth. In creating a LEDS, policy makers often have access to information on abatement potential and costs for clean energy technologies, but there is a scarcity of economy-wide approaches for evaluating and presenting information on other dimensions of importance to development, such as human welfare, poverty alleviation, and energy security. To address this shortcoming, this paper proposes a new tool for communicating development benefits to policy makers as part of a LEDS process. The purpose of this tool is two-fold: 1) to communicate the development benefits associated with each clean energy-related intervention; and 2) to facilitate decision-making on which combination of interventions best contributes to development goals. To pilot this tool, the authors created a visual using data on developmental impacts identified through the Technology Needs Assessment (TNA) project in Montenegro. The visual will then be revised to reflect new data established through the TNA that provides information on cost, GHG mitigation, and the range and magnitude of developmental impacts.
NASA Astrophysics Data System (ADS)
Morton, A.; Stewart, R.; Held, E.; Piburn, J.; Allen, M. R.; McManamay, R.; Sanyal, J.; Sorokine, A.; Bhaduri, B. L.
2017-12-01
Spatiotemporal (ST) analytics applied to data from major providers such as the USGS, NOAA, the World Bank, and the World Health Organization has tremendous value in shedding light on the evolution of physical, cultural, and geopolitical landscapes on local and global levels. Especially powerful is the integration of these physical and cultural datasets across multiple and disparate formats, facilitating new interdisciplinary analytics and insights. Realizing this potential first requires an ST data model that addresses the challenges of properly merging data from multiple authors, with evolving ontological perspectives, semantic differences, changing attributes, and content that is textual, numeric, categorical, and hierarchical. Equally challenging is the development of analytical and visualization approaches that support serious exploration of this integrated data while remaining accessible to practitioners with varied backgrounds. The WSTAMP project at the Oak Ridge National Laboratory has yielded two major results in addressing these challenges: 1) development of the WSTAMP database, a significant advance in ST data modeling that integrates 16,000+ attributes covering 200+ countries for over 50 years from over 30 major sources; and 2) a novel online ST exploration and analysis tool providing an array of modern statistical and visualization techniques for analyzing these data temporally, spatially, and spatiotemporally under a standard analytic workflow. We report on these advances, provide an illustrative case study, and describe how others may freely access the tool.
Zaidi, Hasan A; De Los Reyes, Kenneth; Barkhoudarian, Garni; Litvack, Zachary N; Bi, Wenya Linda; Rincon-Torroella, Jordina; Mukundan, Srinivasan; Dunn, Ian F; Laws, Edward R
2016-03-01
Endoscopic skull base surgery has become increasingly popular among the skull base surgery community, with improved illumination and angled visualization potentially improving tumor resection rates. Intraoperative MRI (iMRI) is used to detect residual disease during the course of the resection. This study is an investigation of the utility of 3-T iMRI in combination with transnasal endoscopy with regard to gross-total resection (GTR) of pituitary macroadenomas. The authors retrospectively reviewed all endoscopic transsphenoidal operations performed in the Advanced Multimodality Image Guided Operating (AMIGO) suite from November 2011 to December 2014. Inclusion criteria were patients harboring presumed pituitary macroadenomas with optic nerve or chiasmal compression and visual loss, operated on by a single surgeon. Of the 27 patients who underwent transsphenoidal resection in the AMIGO suite, 20 patients met the inclusion criteria. The endoscope alone, without the use of iMRI, would have correctly predicted the extent of resection in 13 (65%) of 20 cases. Gross-total resection was achieved in 12 patients (60%) prior to MRI. Intraoperative MRI helped convert 1 subtotal resection (STR) and 4 near-total resections (NTRs) to GTRs, increasing the number of GTRs from 12 (60%) to 16 (80%). Despite advances in visualization provided by the endoscope, the incidence of residual disease can potentially place the patient at risk for additional surgery. The authors found that iMRI can be useful in detecting unexpected residual tumor. The cost-effectiveness of this tool is yet to be determined.
Methods, Tools and Current Perspectives in Proteogenomics *
Ruggles, Kelly V.; Krug, Karsten; Wang, Xiaojing; Clauser, Karl R.; Wang, Jing; Payne, Samuel H.; Fenyö, David; Zhang, Bing; Mani, D. R.
2017-01-01
With combined technological advancements in high-throughput next-generation sequencing and deep mass spectrometry-based proteomics, proteogenomics, i.e. the integrative analysis of proteomic and genomic data, has emerged as a new research field. Early efforts in the field were focused on improving protein identification using sample-specific genomic and transcriptomic sequencing data. More recently, integrative analysis of quantitative measurements from genomic and proteomic studies has yielded novel insights into gene expression regulation, cell signaling, and disease. Many methods and tools have been developed or adapted to enable an array of integrative proteogenomic approaches, and in this article we systematically classify published methods and tools into four major categories: (1) sequence-centric proteogenomics; (2) analysis of proteogenomic relationships; (3) integrative modeling of proteogenomic data; and (4) data sharing and visualization. We provide a comprehensive review of methods and available tools in each category and highlight their typical applications.
Avanti lipid tools: connecting lipids, technology, and cell biology.
Sims, Kacee H; Tytler, Ewan M; Tipton, John; Hill, Kasey L; Burgess, Stephen W; Shaw, Walter A
2014-08-01
Lipid research is challenging owing to the complexity and diversity of the lipidome. Here we review a set of experimental tools developed for the seasoned lipid researcher as well as those who are new to the field of lipid research. Novel tools for probing protein-lipid interactions, applications for lipid-binding antibodies, enhanced systems for the cellular delivery of lipids, improved visualization of lipid membranes using gold-labeled lipids, and advances in mass spectrometric analysis techniques will be discussed. Because lipid mediators are known to participate in a host of signal transduction and trafficking pathways within the cell, a comprehensive lipid toolbox that aids the science of lipidomics research is essential to better understand the molecular mechanisms of interactions between cellular components. This article is part of a Special Issue entitled Tools to study lipid functions. Copyright © 2014. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang
In the implementation of nuclear safeguards, many different techniques are used to monitor the operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, and digital seals to open-source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or loose correlations, it could be beneficial to analyze them together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning-based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based layer that wraps the REST application programming interface (API) and queries the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata, all within a single viewing instance. PMID:24904399
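For context, the kind of REST query that PyXNAT issues against an XNAT server can be sketched as below; the server URL, credentials, and project layout are placeholders rather than XNATView's actual configuration.

```python
# Hedged sketch of browsing an XNAT archive through PyXNAT (placeholder server/credentials).
from pyxnat import Interface

xnat = Interface(server="https://xnat.example.org", user="demo", password="demo")

# List accessible projects, then subjects within the first one.
projects = xnat.select.projects().get()
for subject in xnat.select.project(projects[0]).subjects().get():
    print(projects[0], subject)
```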
NASA Astrophysics Data System (ADS)
Godbole, Saurabh
Traditionally, textual tools have been utilized to teach basic programming languages and paradigms. Research has shown that students tend to be visual learners. Using flowcharts, students can quickly understand the logic of their programs and visualize the flow of commands in the algorithm. Moreover, applying programming to physical systems through the use of a microcontroller to facilitate this type of learning can spark an interest in students to advance their programming knowledge and create novel applications. This study examined whether freshman college students' attitudes toward programming changed after completing a graphical programming lesson. Various attributes of students' attitudes were examined, including confidence, interest, stereotypes, and belief in the usefulness of acquiring programming skills. The study found that there were no statistically significant differences in attitudes either immediately following the session or after a period of four weeks.
Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation
Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.
2014-01-01
Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
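The estimators in this work are movement-based, which is not reproduced here; as a simplified stand-in, the sketch below fits a plain 3D Gaussian kernel density estimate to hypothetical x, y, z telemetry fixes with SciPy to convey the idea of a volumetric utilization distribution.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical telemetry fixes: x, y in metres, z = altitude/depth in metres.
rng = np.random.default_rng(0)
fixes = np.vstack([
    rng.normal(0, 150, 500),   # x
    rng.normal(0, 150, 500),   # y
    rng.normal(50, 20, 500),   # z
])

kde = gaussian_kde(fixes)

# Evaluate the utilization density on a coarse 3D grid; an isopleth of this volume
# (e.g. the 95% contour) would serve as a simplified 3D home-range estimate.
xi, yi, zi = np.mgrid[-500:500:40j, -500:500:40j, 0:120:20j]
grid = np.vstack([xi.ravel(), yi.ravel(), zi.ravel()])
density = kde(grid).reshape(xi.shape)
print(density.max())
```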
ARIES: Enabling Visual Exploration and Organization of Art Image Collections.
Crissaff, Lhaylla; Wood Ruby, Louisa; Deutch, Samantha; DuBois, R Luke; Fekete, Jean-Daniel; Freire, Juliana; Silva, Claudio
2018-01-01
Art historians have traditionally used physical light boxes to prepare exhibits or curate collections. On a light box, they can place slides or printed images, move the images around at will, group them as desired, and visually compare them. The transition to digital images has rendered this workflow obsolete. Now, art historians lack well-designed, unified interactive software tools that effectively support the operations they perform with physical light boxes. To address this problem, we designed ARIES (ARt Image Exploration Space), an interactive image manipulation system that enables the exploration and organization of fine digital art. The system allows images to be compared in multiple ways, offering dynamic overlays analogous to a physical light box, and supporting advanced image comparisons and feature-matching functions, available through computational image processing. We demonstrate the effectiveness of our system in supporting art historians' tasks through real use cases.
Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges
Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc
2013-01-01
The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations, providing very simple-to-use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features can be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as a starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961
The KUPNetViz: a biological network viewer for multiple -omics datasets in kidney diseases.
Moulos, Panagiotis; Klein, Julie; Jupp, Simon; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P
2013-07-24
Constant technological advances have allowed scientists in biology to migrate from conventional single-omics to multi-omics experimental approaches, challenging bioinformatics to bridge this multi-tiered information. Ongoing research in renal biology is no exception. The results of large-scale and/or high throughput experiments, presenting a wealth of information on kidney disease are scattered across the web. To tackle this problem, we recently presented the KUPKB, a multi-omics data repository for renal diseases. In this article, we describe KUPNetViz, a biological graph exploration tool allowing the exploration of KUPKB data through the visualization of biomolecule interactions. KUPNetViz enables the integration of multi-layered experimental data over different species, renal locations and renal diseases to protein-protein interaction networks and allows association with biological functions, biochemical pathways and other functional elements such as miRNAs. KUPNetViz focuses on the simplicity of its usage and the clarity of resulting networks by reducing and/or automating advanced functionalities present in other biological network visualization packages. In addition, it allows the extrapolation of biomolecule interactions across different species, leading to the formulations of new plausible hypotheses, adequate experiment design and to the suggestion of novel biological mechanisms. We demonstrate the value of KUPNetViz by two usage examples: the integration of calreticulin as a key player in a larger interaction network in renal graft rejection and the novel observation of the strong association of interleukin-6 with polycystic kidney disease. The KUPNetViz is an interactive and flexible biological network visualization and exploration tool. It provides renal biologists with biological network snapshots of the complex integrated data of the KUPKB allowing the formulation of new hypotheses in a user friendly manner.
Pathway Tools version 19.0 update: software for pathway/genome informatics and systems biology
Latendresse, Mario; Paley, Suzanne M.; Krummenacker, Markus; Ong, Quang D.; Billington, Richard; Kothari, Anamika; Weaver, Daniel; Lee, Thomas; Subhraveti, Pallavi; Spaulding, Aaron; Fulcher, Carol; Keseler, Ingrid M.; Caspi, Ron
2016-01-01
Pathway Tools is a bioinformatics software environment with a broad set of capabilities. The software provides genome-informatics tools such as a genome browser, sequence alignments, a genome-variant analyzer and comparative-genomics operations. It offers metabolic-informatics tools, such as metabolic reconstruction, quantitative metabolic modeling, prediction of reaction atom mappings and metabolic route search. Pathway Tools also provides regulatory-informatics tools, such as the ability to represent and visualize a wide range of regulatory interactions. This article outlines the advances in Pathway Tools in the past 5 years. Major additions include components for metabolic modeling, metabolic route search, computation of atom mappings and estimation of compound Gibbs free energies of formation; addition of editors for signaling pathways, for genome sequences and for cellular architecture; storage of gene essentiality data and phenotype data; display of multiple alignments, and of signaling and electron-transport pathways; and development of Python and web-services application programming interfaces. Scientists around the world have created more than 9800 Pathway/Genome Databases by using Pathway Tools, many of which are curated databases for important model organisms. PMID:26454094
Advances in Engineering the Fly Genome with the CRISPR-Cas System
Bier, Ethan; Harrison, Melissa M.; O’Connor-Giles, Kate M.; Wildonger, Jill
2018-01-01
Drosophila has long been a premier model for the development and application of cutting-edge genetic approaches. The CRISPR-Cas system now adds the ability to manipulate the genome with ease and precision, providing a rich toolbox to interrogate relationships between genotype and phenotype, to delineate and visualize how the genome is organized, to illuminate and manipulate RNA, and to pioneer new gene drive technologies. Myriad transformative approaches have already originated from the CRISPR-Cas system, which will likely continue to spark the creation of tools with diverse applications. Here, we provide an overview of how CRISPR-Cas gene editing has revolutionized genetic analysis in Drosophila and highlight key areas for future advances. PMID:29301946
Visualization and Interaction in Research, Teaching, and Scientific Communication
NASA Astrophysics Data System (ADS)
Ammon, C. J.
2017-12-01
Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices provide those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches. I illustrate how a little programming ability can free scientists from the constraints of existing tools and can foster a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available on desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.
2017-04-01
Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization (report covering October 2013 - September 2014). The effort investigated various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluated their potential use for thick-client desktop platforms.
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
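LipidQC's own procedure is not reproduced here; as a hedged illustration of comparing measured SRM 1950 concentrations against consensus benchmarks, the sketch below flags species whose z-scores fall outside ±2. All lipid names and numbers are hypothetical, not the NIST consensus values.

```python
import numpy as np

# Hypothetical measured concentrations (nmol/mL) for a few lipid species in SRM 1950,
# alongside hypothetical consensus means and SDs; the real benchmark values come from the
# NIST interlaboratory exercise and are not reproduced here.
lipids       = ["PC 16:0_18:1", "TG 52:2", "CE 18:2"]
measured     = np.array([310.0, 120.0, 980.0])
consensus    = np.array([295.0, 135.0, 1010.0])
consensus_sd = np.array([25.0, 18.0, 90.0])

z = (measured - consensus) / consensus_sd
for name, score in zip(lipids, z):
    flag = "OK" if abs(score) <= 2 else "check"
    print(f"{name}: z = {score:+.2f} ({flag})")
```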
From Streaming Data to Streaming Insights: The Impact of Data Velocities on Mental Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Pike, William A.; Cook, Kristin A.
The rise of Big Data has influenced the design and technical implementation of visual analytic tools required to handle the increased volumes, velocities, and varieties of data. This has required a set of data management and computational advancements to allow us to store and compute on such datasets. However, as the ultimate goal of visual analytic technology is to enable the discovery and creation of insights by users, an under-explored area is understanding how these datasets impact their mental models. That is, how have the analytic processes and strategies of users changed? How have users changed their perception of how to leverage, and ask questions of, these datasets?
Opportunities for CRISPR/Cas9 Gene Editing in Retinal Regeneration Research
Campbell, Leah J.; Hyde, David R.
2017-01-01
While retinal degeneration and disease results in permanent damage and vision loss in humans, the severely damaged zebrafish retina has a high capacity to regenerate lost neurons and restore visual behaviors. Advancements in understanding the molecular and cellular basis of this regeneration response give hope that strategies and therapeutics may be developed to restore sight to blind and visually-impaired individuals. Our current understanding has been facilitated by the amenability of zebrafish to molecular tools, imaging techniques, and forward and reverse genetic approaches. Accordingly, the zebrafish research community has developed a diverse array of research tools for use in developing and adult animals, including toolkits for facilitating the generation of transgenic animals, systems for inducible, cell-specific transgene expression, and the creation of knockout alleles for nearly every protein coding gene. As CRISPR/Cas9 genome editing has begun to revolutionize molecular biology research, the zebrafish community has responded in stride by developing CRISPR/Cas9 techniques for the zebrafish as well as incorporating CRISPR/Cas9 into available toolsets. The application of CRISPR/Cas9 to retinal regeneration research will undoubtedly bring us closer to understanding the mechanisms underlying retinal repair and vision restoration in the zebrafish, as well as developing therapeutic approaches that will restore vision to blind and visually-impaired individuals. This review focuses on how CRISPR/Cas9 has been integrated into zebrafish research toolsets and how this new tool will revolutionize the field of retinal regeneration research. PMID:29218308
Iterating between Tools to Create and Edit Visualizations.
Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah
2017-01-01
A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.
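The merge framing can be illustrated with a toy three-way merge over element attributes; this is only a sketch of the idea, not Hanpuku's implementation:

```python
# Toy sketch of "iteration as merge": reconcile attribute changes made in a
# drawing tool with changes regenerated by a script, keyed on a common base.
def three_way_merge(base, generated, hand_edited):
    merged = {}
    for key in set(base) | set(generated) | set(hand_edited):
        b, g, h = base.get(key), generated.get(key), hand_edited.get(key)
        if g == b:            # script did not change it: keep the hand edit
            merged[key] = h
        elif h == b:          # designer did not change it: keep the script value
            merged[key] = g
        else:                 # both changed it: flag a conflict for the user
            merged[key] = ("CONFLICT", g, h)
    return merged

base      = {"fill": "#999", "height": 40}   # attributes of one mark, hypothetical
generated = {"fill": "#999", "height": 55}   # data-driven update from the script
hand_edit = {"fill": "#c33", "height": 40}   # designer recolor in the drawing tool
print(three_way_merge(base, generated, hand_edit))   # {'fill': '#c33', 'height': 55}
```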
Survey of visualization and analysis tools
NASA Technical Reports Server (NTRS)
Meyer, P. J.
1994-01-01
A large number of commercially available visualization and analysis tools are available to the researcher. The strengths and limitations of some of these tools, from the viewpoint of the earth sciences discipline, are discussed. Visualization and analysis tools fall into one of two categories: those that are designed for a specific purpose and are not extensible, and generic visual programming tools that are extensible. Most of the extensible packages examined incorporate a data flow paradigm.
Developments in the CCP4 molecular-graphics project.
Potterton, Liz; McNicholas, Stuart; Krissinel, Eugene; Gruber, Jan; Cowtan, Kevin; Emsley, Paul; Murshudov, Garib N; Cohen, Serge; Perrakis, Anastassis; Noble, Martin
2004-12-01
Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data is only available to modest resolution [for example, I/sigma(I) < 2.0 for minimum resolution 2.5 A]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4 mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4 mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.
Oishi, Maho; Nakamura, Hajime; Hangai, Masanori; Oishi, Akio; Otani, Atsushi; Yoshimura, Nagahisa
2012-01-01
To assess contrast visual acuity (CVA) in patients with retinitis pigmentosa (RP) and compare the result with standard visual acuity (VA), retinal thickness, status of inner segment/outer segment junction, and central visual field. Thirty-nine eyes of 39 patients with RP and 39 eyes of 39 healthy individuals were studied. To see the difference in CVA between RP patients and normal controls, only subjects with standard VA of 1.0 (20/20) or better were included. This was a cross-sectional study. CVA in various light conditions was measured with CAT-2000 and was compared between patients and controls. CVA of patients was further analyzed for association with other parameters including foveal retinal thickness, outer nuclear layer thickness, the status of inner segment/outer segment junction measured with optical coherence tomography (OCT), and visual field mean deviation (MD) measured with Humphrey field analyzer 10-2 program. CVA impairment was evident in RP patients compared to controls (P < 0.01, in all measurement conditions). Multivariate analysis showed association of logarithm of the minimum angle of resolution (logMAR) with CVAs in several conditions. None of the OCT measurements was associated with CVA. When patients were divided into three groups based on MD, the most advanced group (MD worse than or equal to -20 dB) showed impairment of mesopic CVA (P < 0.05, under mesopic condition of 100% without glare, with glare, and 25% without glare). CVA impairment was confirmed in RP patients, especially in advanced cases. CVA measured with CAT-2000 may be a useful tool for assessing foveal function in RP patients.
Collaborating and sharing data in epilepsy research.
Wagenaar, Joost B; Worrell, Gregory A; Ives, Zachary; Dümpelmann, Matthias; Matthias, Dümpelmann; Litt, Brian; Schulze-Bonhage, Andreas
2015-06-01
Technological advances are dramatically accelerating translational research in epilepsy. Neurophysiology, imaging, and metadata are now recorded digitally in most centers, enabling quantitative analysis. Basic and translational research opportunities to use these data are exploding, but academic and funding cultures prevent this potential from being realized. Research on epileptogenic networks, antiepileptic devices, and biomarkers could progress rapidly if collaborative efforts to digest this "big neuro data" could be organized. Higher temporal and spatial resolution data are driving the need for novel multidimensional visualization and analysis tools. Crowd-sourced science, of the kind that drives innovation in computer science, could easily be mobilized for these tasks, were it not for competition for funding and attribution and the lack of standard data formats and platforms. As these efforts mature, there is a great opportunity to advance epilepsy research through data sharing and increased collaboration across the international research community.
Cardiac imaging: working towards fully-automated machine analysis & interpretation
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-01-01
Introduction: Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne
A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
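For readers unfamiliar with scripted ParaView workflows of the kind RVA extends, a minimal pvpython sketch (with a hypothetical input file, not an RVA dataset or RVA's own filters) looks like this:

```python
# A minimal sketch of a scripted ParaView workflow (run with pvpython).
# The file name is hypothetical; RVA's custom filters are not shown here.
from paraview.simple import OpenDataFile, Slice, Show, Render, SaveScreenshot

reservoir = OpenDataFile("reservoir_model.vtu")   # hypothetical framework model
section = Slice(Input=reservoir)                  # cross section through the model
section.SliceType.Normal = [1.0, 0.0, 0.0]        # slice plane normal along x

Show(section)
Render()
SaveScreenshot("cross_section.png")               # static image of the section
```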
RVA: A Plugin for ParaView 3.14
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-04
RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong
2016-01-08
RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the analysis findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
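A hedged sketch of the parallelism behind Step #1 (this is not QuickRNASeq's code; process_sample() stands in for the real alignment, counting, QC, and SNP-calling commands):

```python
# Sketch of per-sample jobs run in parallel; sample ids and the returned
# fields are placeholders, not QuickRNASeq's implementation.
from multiprocessing import Pool

SAMPLES = ["sample_01", "sample_02", "sample_03"]   # hypothetical sample ids

def process_sample(sample):
    # Placeholder for: align reads, count mapped reads, run QC, call SNPs.
    return {"sample": sample, "mapped_reads": 0, "qc_pass": True}

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        per_sample_results = pool.map(process_sample, SAMPLES)
    # Step #2 would merge per-sample results into one interactive report.
    print(per_sample_results)
```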
The BOEING 777 - concurrent engineering and digital pre-assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abarbanel, B.
The processes created on the 777 for checking designs were called "digital pre-assembly". Using FlyThru(tm), a spin-off of a Boeing advanced computing research project, engineers were able to view up to 1500 models (15000 solids) in 3D, traversing that data at high speed. FlyThru(tm) was rapidly deployed in 1991 to meet the needs of the 777 for large scale product visualization and verification. The digital pre-assembly process has had fantastic results. The 777 has had far fewer assembly and systems problems compared to previous airplane programs. Today, FlyThru(tm) is installed on hundreds of workstations on almost every airplane program, and is being used on Space Station, F22, AWACS, and other defense projects. Its applications have gone far beyond just design review. In many ways, FlyThru is a Data Warehouse supported by advanced tools for analysis. It is today being integrated with Knowledge Based Engineering geometry generation tools.
Collaboration tools and techniques for large model datasets
Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.
2008-01-01
In MREA and many other marine applications, it is common to have multiple models running with different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures are then used which enable scientists to explore data on the original model grids using tools they are familiar with. It is also low-bandwidth, enabling users to extract just the data they require, an important feature for access from ship or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters - no advanced programming support is necessary. ?? 2007 Elsevier B.V. All rights reserved.
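A minimal sketch of the low-bandwidth access pattern described above, using xarray against a hypothetical OPeNDAP endpoint (the URL, variable, and dimension names are invented):

```python
# Hedged sketch: open a remote CF-compliant dataset over OPeNDAP and pull
# only a small slice; nothing bulky is transferred until the subset is read.
import xarray as xr

URL = "http://example.org/thredds/dodsC/ocean_his.nc"   # hypothetical endpoint
ds = xr.open_dataset(URL)                                # lazy open, metadata only

surface_temp = ds["temp"].isel(ocean_time=-1, s_rho=-1)  # hypothetical variable/dims
subset = surface_temp.isel(eta_rho=slice(100, 150), xi_rho=slice(200, 260))
subset.to_netcdf("temp_subset.nc")                       # only this slice is fetched
```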
Human-system interfaces for space cognitive awareness
NASA Astrophysics Data System (ADS)
Ianni, J.
Space situational awareness is a human activity. We have advanced sensors and automation capabilities but these continue to be tools for humans to use. The reality is, however, that humans cannot take full advantage of the power of these tools due to time constraints, cognitive limitations, poor tool integration, poor human-system interfaces, and other reasons. Some excellent tools may never be used in operations and, even if they were, they may not be well suited to provide a cohesive and comprehensive picture. Recognizing this, the Air Force Research Laboratory (AFRL) is applying cognitive science principles to increase the knowledge derived from existing tools and creating new capabilities to help space analysts and decision makers. At the center of this research is Sensemaking Support Environment technology. The concept is to create cognitive-friendly computer environments that connect critical and creative thinking for holistic decision making. AFRL is also investigating new visualization technologies for multi-sensor exploitation and space weather, human-to-human collaboration technologies, and other technology that will be discussed in this paper.
Overview of codes and tools for nuclear engineering education
NASA Astrophysics Data System (ADS)
Yakovlev, D.; Pryakhin, A.; Medvedeva, L.
2017-01-01
Recent world trends in nuclear education have developed in the direction of social education, networking, virtual tools and codes. MEPhI, as a global leader in the world education market, implements new advanced technologies for distance and online learning and for student research work. MEPhI produced special codes, tools and web resources based on the internet platform to support education in the field of nuclear technology. At the same time, MEPhI actively uses codes and tools from third parties. Several types of tools are considered: calculation codes, nuclear data visualization tools, virtual labs, PC-based educational simulators for nuclear power plants (NPP), CLP4NET, education web-platforms, and distance courses (MOOCs and controlled and managed content systems). The university pays special attention to integrated products such as CLP4NET, which is not a learning course, but serves to automate the process of learning through distance technologies. CLP4NET organizes all tools in the same information space. Up to now, MEPhI has achieved significant results in the field of distance education and online system implementation.
SECIMTools: a suite of metabolomics data analysis tools.
Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M
2018-04-20
Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high throughput technology for untargeted analysis of metabolites. Open access, easy to use, analytic tools that are broadly accessible to the biological community need to be developed. While technology used in metabolomics varies, most metabolomics studies have a set of features identified. Galaxy is an open access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modular modularity clustering), basic statistical analysis methods (partial least squares - discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (least absolute shrinkage and selection operator LASSO and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for easy use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.
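SECIMTools wraps these methods for Galaxy; purely as a hedged illustration of two of the named building blocks (PCA and a random forest), a scikit-learn sketch on a random stand-in feature matrix might look like:

```python
# Not SECIMTools itself: a sketch of the kind of building blocks it wraps,
# run on an invented feature matrix (40 samples x 200 metabolite features).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 200))        # stand-in peak-intensity matrix
groups = np.repeat([0, 1], 20)               # two hypothetical treatment groups

scores = PCA(n_components=2).fit_transform(features)     # 2-D scores for plotting
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(features, groups)
top_features = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-ranked features:", top_features)
```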
URPD: a specific product primer design tool
2012-01-01
Background: Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is not aimed at analyzing large sequences but instead offers a rather straightforward way of visualizing the primer design process for infrequent users. Findings: URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. The comparison of practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions: URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experiential PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312
URPD: a specific product primer design tool.
Chuang, Li-Yeh; Cheng, Yu-Huei; Yang, Cheng-Hong
2012-06-19
Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is not aimed at analyzing large sequences but instead offers a rather straightforward way of visualizing the primer design process for infrequent users. URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. The comparison of practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experiential PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/.
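URPD's MA/GA optimization is not reproduced here; as a toy illustration of per-primer terms such a fitness function might score, consider GC fraction and a Wallace-rule melting-temperature estimate:

```python
# Toy sketch (not URPD's code): two common per-primer quality terms.
def gc_fraction(primer):
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc          # rough rule of thumb for short primers

candidate = "ATGCGTACGTTAGCCTAGGA"   # hypothetical 20-mer
print(f"GC = {gc_fraction(candidate):.2f}, Tm ~ {wallace_tm(candidate)} C")
```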
Nebhydro: Sharing Geospatial Data to Supportwater Management in Nebraska
NASA Astrophysics Data System (ADS)
Kamble, B.; Irmak, A.; Hubbard, K.; Deogun, J.; Dvorak, B.
2012-12-01
Recent advances in web-enabled geographical technologies have the potential to make a dramatic impact on the development of highly interactive spatial applications on the web for visualization of large-scale geospatial data by water resources and irrigation scientists. Spatial and point-scale water resources data visualization is an emerging and challenging application domain. Query-based visual explorations of geospatial hydrological data can play an important role in stimulating scientific hypotheses and seeking causal relationships among hydro variables. The Nebraska Hydrological Information System (NebHydro) utilizes ESRI's ArcGIS server technology to increase technological awareness among farmers, irrigation managers and policy makers. Web-based geospatial applications are an effective way to expose scientific hydrological datasets to the research community and the public. NebHydro uses Adobe Flex technology to offer an online visualization and data analysis system for presentation of social and economic data. Internet mapping services are an integrated product of GIS and Internet technologies and a favored solution for achieving the interoperability of GIS. The development of Internet-based GIS services in the state of Nebraska showcases the benefits of sharing geospatial hydrological data among agencies, resource managers and policy makers. Geospatial hydrological information (evapotranspiration from remote sensing, vegetation indices (NDVI), USGS stream gauge data, climatic data, etc.) is generally generated through model simulation (METRIC, SWAP, Linux, Python-based scripting, etc.). Information is compiled into and stored within object-oriented relational spatial databases using a geodatabase information model that supports the key data types needed by applications including features, relationships, networks, imagery, terrains, maps and layers. The system provides online access, querying, visualization, and analysis of the hydrological data from several sources at one place. The study indicates that Internet GIS, developed using advanced technologies, provides valuable educational potential to users in hydrology and irrigation engineering, and suggests that such a system can support advanced hydrological data access and analysis tools to improve the utility of data in operations. Keywords: Hydrological Information System, NebHydro, Water Management, data sharing, data visualization, ArcGIS server.
Contingency Analysis Post-Processing With Advanced Computing and Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin
Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
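A hedged sketch of the post-processing idea (the record structure, column names, and thresholds are invented, not the paper's implementation): scan many contingency outputs in parallel and tabulate the worst overloads for a web dashboard:

```python
# Hedged sketch: summarize the worst branch loading per contingency in
# parallel, then keep only overloads for display. All values are invented.
from multiprocessing import Pool
import pandas as pd

def worst_violation(record):
    # record: (contingency_id, {branch_id: loading_percent})
    cid, loadings = record
    branch, loading = max(loadings.items(), key=lambda kv: kv[1])
    return {"contingency": cid, "branch": branch, "loading_pct": loading}

records = [("ctg-001", {"line-12": 104.2, "line-7": 88.0}),
           ("ctg-002", {"line-12": 96.5, "line-9": 112.7})]

if __name__ == "__main__":
    with Pool() as pool:
        rows = pool.map(worst_violation, records)
    summary = pd.DataFrame(rows).sort_values("loading_pct", ascending=False)
    print(summary[summary["loading_pct"] > 100.0])   # overloads above 100 %
```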
Network portal: a database for storage, analysis and visualization of biological networks
Turkarslan, Serdar; Wurtmann, Elisabeth J.; Wu, Wei-Ju; Jiang, Ning; Bare, J. Christopher; Foley, Karen; Reiss, David J.; Novichkov, Pavel; Baliga, Nitin S.
2014-01-01
The ease of generating high-throughput data has enabled investigations into organismal complexity at the systems level through the inference of networks of interactions among the various cellular components (genes, RNAs, proteins and metabolites). The wider scientific community, however, currently has limited access to tools for network inference, visualization and analysis because these tasks often require advanced computational knowledge and expensive computing resources. We have designed the network portal (http://networks.systemsbiology.net) to serve as a modular database for the integration of user uploaded and public data, with inference algorithms and tools for the storage, visualization and analysis of biological networks. The portal is fully integrated into the Gaggle framework to seamlessly exchange data with desktop and web applications and to allow the user to create, save and modify workspaces, and it includes social networking capabilities for collaborative projects. While the current release of the database contains networks for 13 prokaryotic organisms from diverse phylogenetic clades (4678 co-regulated gene modules, 3466 regulators and 9291 cis-regulatory motifs), it will be rapidly populated with prokaryotic and eukaryotic organisms as relevant data become available in public repositories and through user input. The modular architecture, simple data formats and open API support community development of the portal. PMID:24271392
NASA Technical Reports Server (NTRS)
Lammers, Matt
2017-01-01
Geospatial weather visualization remains predominantly a two-dimensional endeavor. Even popular advanced tools like the Nullschool Earth display 2-dimensional fields on a 3-dimensional globe. Yet much of the observational data and model output contains detailed three-dimensional fields. In 2014, NASA and JAXA (Japanese Space Agency) launched the Global Precipitation Measurement (GPM) satellite. Its two instruments, the Dual-frequency Precipitation Radar (DPR) and GPM Microwave Imager (GMI), observe much of the Earth's atmosphere between 65 degrees North Latitude and 65 degrees South Latitude. As part of the analysis and visualization tools developed by the Precipitation Processing System (PPS) Group at NASA Goddard, a series of CesiumJS [using Cesium Markup Language (CZML), JavaScript (JS) and JavaScript Object Notation (JSON)]-based globe viewers have been developed to improve data acquisition decision making and to enhance scientific investigation of the satellite data. Other demos have also been built to illustrate the capabilities of CesiumJS in presenting atmospheric data, including model forecasts of hurricanes, observed surface radar data, and gridded analyses of global precipitation. This talk will present these websites and the various workflows used to convert binary satellite and model data into a form easily integrated with CesiumJS.
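As a minimal illustration of the data-conversion step (the coordinates and values are invented, not GPM data), a CZML document is just a JSON array whose first packet declares the document:

```python
# Hedged sketch: write a tiny CZML file that CesiumJS can load; every value
# here is a placeholder, not real DPR/GMI output.
import json

czml = [
    {"id": "document", "name": "GPM overpass demo", "version": "1.0"},
    {
        "id": "precip-sample",
        "position": {"cartographicDegrees": [-80.6, 28.4, 7000.0]},  # lon, lat, height (m)
        "point": {"pixelSize": 8},
        "description": "Hypothetical near-surface precipitation sample",
    },
]

with open("gpm_demo.czml", "w") as f:
    json.dump(czml, f, indent=2)
```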
Investigation of roughing machining simulation by using visual basic programming in NX CAM system
NASA Astrophysics Data System (ADS)
Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed
2018-03-01
This paper outlines a simulation study to investigate the characteristics of roughing machining simulation in 4th-axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in rough milling operations is critical in 4th-axis machining. The main purpose of the roughing operation is to approximately shape the machined parts into finished form by removing the bulk of material from workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientations to generate the estimated volume removed from the machined parts. The cutting orientation with the highest volume removal is denoted as the optimum value and chosen to execute the roughing operation. In order to run the simulation, customized software is developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into programming code via advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations. It permits integration with any independent program files to execute specific operations. This paper aims to discuss the simulation program and identify optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
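The paper's tool is written in Visual Basic inside NX CAM; the selection rule it describes can be sketched language-agnostically (here in Python, with invented volumes) as picking the orientation that maximizes removed volume:

```python
# Toy sketch of the selection rule only; angles and removed volumes are
# invented, and the real simulation runs inside NX CAM, not here.
simulated_runs = {
    0:   41200.0,   # orientation angle (deg) -> removed volume (mm^3)
    90:  47850.0,
    180: 39990.0,
    270: 45310.0,
}

best_angle = max(simulated_runs, key=simulated_runs.get)
print(f"optimum roughing orientation: {best_angle} deg "
      f"({simulated_runs[best_angle]:.0f} mm^3 removed)")
```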
Trapnell, Cole; Roberts, Adam; Goff, Loyal; Pertea, Geo; Kim, Daehwan; Kelley, David R; Pimentel, Harold; Salzberg, Steven L; Rinn, John L; Pachter, Lior
2012-01-01
Recent advances in high-throughput cDNA sequencing (RNA-seq) can reveal new genes and splice variants and quantify expression genome-wide in a single assay. The volume and complexity of data from RNA-seq experiments necessitate scalable, fast and mathematically principled analysis software. TopHat and Cufflinks are free, open-source software tools for gene discovery and comprehensive expression analysis of high-throughput mRNA sequencing (RNA-seq) data. Together, they allow biologists to identify new genes and new splice variants of known ones, as well as compare gene and transcript expression under two or more conditions. This protocol describes in detail how to use TopHat and Cufflinks to perform such analyses. It also covers several accessory tools and utilities that aid in managing data, including CummeRbund, a tool for visualizing RNA-seq analysis results. Although the procedure assumes basic informatics skills, these tools assume little to no background with RNA-seq analysis and are meant for novices and experts alike. The protocol begins with raw sequencing reads and produces a transcriptome assembly, lists of differentially expressed and regulated genes and transcripts, and publication-quality visualizations of analysis results. The protocol's execution time depends on the volume of transcriptome sequencing data and available computing resources but takes less than 1 d of computer time for typical experiments and ~1 h of hands-on time. PMID:22383036
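A hedged sketch of driving the first two protocol steps from Python via subprocess (the index and read file names are placeholders; the full protocol continues with Cuffmerge, Cuffdiff, and CummeRbund in R, which are not shown):

```python
# Hedged sketch only: invoke TopHat and Cufflinks on placeholder inputs;
# exact options for a real experiment should follow the published protocol.
import subprocess

subprocess.run(["tophat", "-p", "4", "-o", "tophat_out",
                "genome_index", "reads_1.fastq", "reads_2.fastq"], check=True)

subprocess.run(["cufflinks", "-p", "4", "-o", "cufflinks_out",
                "tophat_out/accepted_hits.bam"], check=True)
```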
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
The Development of a Visual-Perceptual Chemistry Specific (VPCS) Assessment Tool
ERIC Educational Resources Information Center
Oliver-Hoyo, Maria; Sloan, Caroline
2014-01-01
The development of the Visual-Perceptual Chemistry Specific (VPCS) assessment tool is based on items that align to eight visual-perceptual skills considered as needed by chemistry students. This tool includes a comprehensive range of visual operations and presents items within a chemistry context without requiring content knowledge to solve…
3D Slicer as an Image Computing Platform for the Quantitative Imaging Network
Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron
2012-01-01
Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690
Advances in the production of freeform optical surfaces
NASA Astrophysics Data System (ADS)
Tohme, Yazid E.; Luniya, Suneet S.
2007-05-01
Recent market demands for free-form optics have challenged the industry to find new methods and techniques to manufacture free-form optical surfaces with a high level of accuracy and reliability. Production techniques are becoming a mix of multi-axis single point diamond machining centers or deterministic ultra precision grinding centers coupled with capable measurement systems to accomplish the task. It has been determined that a complex software tool is required to seamlessly integrate all aspects of the manufacturing process chain. Advances in computational power and improved performance of computer controlled precision machinery have driven the use of such software programs to measure, visualize, analyze, produce and re-validate the 3D free-form design, thus making the process of manufacturing such complex surfaces a viable task. Consolidation of the entire production cycle in a comprehensive software tool that can interact with all systems in the design, production and measurement phases will enable manufacturers to solve these complex challenges, providing improved product quality, simplified processes, and enhanced performance. The work being presented describes the latest advancements in developing such a software package for the entire fabrication process chain for aspheric and free-form shapes. It applies a rational B-spline based kernel to transform an optical design in the form of a parametric definition (optical equation), a standard CAD format, or a cloud of points into a central format that drives the simulation. This software tool creates a closed loop for the fabrication process chain. It integrates surface analysis and compensation, tool path generation, and measurement analysis in one package.
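The commercial kernel described above is not public; as a loose stand-in for its core operation, SciPy's B-spline routines can fit a smooth surface to a point cloud and re-evaluate it on a grid:

```python
# Hedged sketch (not the vendor's kernel): fit a bicubic B-spline surface to
# a synthetic point cloud and evaluate it on a regular grid.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
z = 0.5 * x**2 - 0.3 * y**2 + 0.01 * rng.normal(size=400)   # toy freeform sag

tck = bisplrep(x, y, z, s=0.05)             # fitted B-spline representation
grid = np.linspace(-1, 1, 50)
surface = bisplev(grid, grid, tck)          # evaluate on a 50 x 50 grid
print("peak-to-valley of fitted surface:", surface.max() - surface.min())
```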
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Myers, A.; Sorokine, A.; Axley, E.; Anderson, D.; Burdette, J.; Biddle, C.; Hohl, A.; Eberle, R.; Kaufman, J.; Morton, A.
2017-10-01
Spatiotemporal (ST) analytics applied to major data sources such as the World Bank and World Health Organization has shown tremendous value in shedding light on the evolution of cultural, health, economic, and geopolitical landscapes on a global level. WSTAMP engages this opportunity by situating analysts, data, and analytics together within a visually rich and computationally rigorous online analysis environment. Since introducing WSTAMP at the First International Workshop on Spatiotemporal Computing, several transformative advances have occurred. Collaboration with human computer interaction experts led to a complete interface redesign that deeply immerses the analyst within a ST context, significantly increases visual and textual content, provides navigational crosswalks for attribute discovery, substantially reduces mouse and keyboard actions, and supports user data uploads. Secondly, the database has been expanded to include over 16,000 attributes, 50 years of time, and 200+ nation states, and redesigned to support non-annual, non-national, city, and interaction data. Finally, two new analytics are implemented for analyzing large portfolios of multi-attribute data and measuring the behavioral stability of regions along different dimensions. These advances required substantial new approaches in design, algorithmic innovations, and increased computational efficiency. We report on these advances and inform how others may freely access the tool.
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Advanced post processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools suited for Space Physics post processing. Building on the work from the Center For Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The tool-kit plugin currently provides tools for reading LFM and Enlil data sets, and provides automated tools for data comparison with NASA's CDAweb database. As work progresses, many additional tools will be added and through open-source collaboration, we hope to add readers for additional model types, as well as any additional tools deemed necessary by the scientific public. The ultimate end goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
NASA Technical Reports Server (NTRS)
Saha, Dipanjan; Lewandowski, Edward J.
2013-01-01
The steady-state, nearly sinusoidal behavior of the components in a free-piston Stirling engine allows for visualization of the forces in the system using phasor diagrams. Based on Newton's second law, F = ma, any phasor diagrams modeling a given component in a system should close if all of the acting forces have been considered. Since the Advanced Stirling Radioisotope Generator (ASRG), currently being developed for future NASA deep space missions, is made up of such nearly sinusoidally oscillating components, its phasor diagrams would also be expected to close. A graphical user interface (GUI) has been written in MATLAB (MathWorks), which takes user input data, passes it to Sage (Gedeon Associates), a one-dimensional thermodynamic modeling program used to model the Stirling convertor, runs Sage, and then automatically plots the phasor diagrams. Using this software tool, the effect of varying different Sage inputs on the phasor diagrams was determined. The parameters varied were piston amplitude, hot-end temperature, cold-end temperature, operating frequency, and displacer spring constant. These phasor diagrams offer useful insight into convertor operation and performance.
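The closure idea can be illustrated with a toy phasor calculation (all values are invented, not ASRG or Sage parameters): writing each sinusoidal force as a complex phasor, an unmeasured term can be recovered from Newton's second law:

```python
# Toy illustration of phasor closure; mass, stiffness, damping, frequency and
# amplitude are invented, not Stirling convertor values.
import numpy as np

omega = 2 * np.pi * 80.0      # operating frequency (rad/s), hypothetical
m, k, c = 0.5, 3.0e4, 15.0    # moving mass, spring rate, damping, hypothetical
X = 0.005                     # piston displacement phasor (m), taken as real

f_spring  = -k * X                     # spring force phasor (N)
f_damping = -1j * omega * c * X        # damping force phasor (N)

# Newton's second law in phasor form: sum(F) = m*a = -m*omega**2*X, so the
# remaining (e.g. gas) force phasor is whatever closes the diagram.
f_gas = -m * omega**2 * X - f_spring - f_damping
print("inferred gas force phasor (N):", f_gas)
```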
NASA Astrophysics Data System (ADS)
Garcia Fernandez, J.; Tammi, K.; Joutsiniemi, A.
2017-02-01
Recent advances in Terrestrial Laser Scanner (TLS) technology, in terms of cost and flexibility, have consolidated this technology as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data is used, it basically remains stored and left to waste. How can highly accurate and dense point clouds (of the built heritage) be processed for their reuse, especially to engage a broader audience? This paper aims to answer this question by a channel that minimizes the need for expert knowledge, while enhancing the interactivity with the as-built digital data: Virtual Heritage Dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
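A rough sketch of the core projection step (not the ProDigiOUs pipeline, and real stereoscopic panoramas need more care with per-direction eye offsets): map scan points around a chosen viewpoint onto an equirectangular image, once per eye with a small horizontal offset:

```python
# Hedged sketch: project a stand-in point cloud to equirectangular pixel
# coordinates for a left and a right eye position.
import numpy as np

def equirectangular(points, eye, width=2048, height=1024):
    v = points - eye                                   # vectors from the eye
    azimuth = np.arctan2(v[:, 1], v[:, 0])             # -pi .. pi
    elevation = np.arcsin(v[:, 2] / np.linalg.norm(v, axis=1))  # -pi/2 .. pi/2
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return row, col

points = np.random.default_rng(2).uniform(-10, 10, size=(1000, 3))  # stand-in scan
ipd = 0.064                                            # interpupillary distance (m)
left  = equirectangular(points, eye=np.array([-ipd / 2, 0.0, 1.6]))
right = equirectangular(points, eye=np.array([+ipd / 2, 0.0, 1.6]))
```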
Advanced Query and Data Mining Capabilities for MaROS
NASA Technical Reports Server (NTRS)
Wang, Paul; Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.
2013-01-01
The Mars Relay Operational Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay network. These tools span several levels of the software architecture, including a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. As part of MaROS, the innovators have developed and implemented a feature set that operates on several levels of the software architecture. This new feature is an advanced querying capability through either the Web-based user interface, or through a back-end REST interface, to access all of the data gathered from the network. This software is not meant to replace the REST interface, but to augment and expand the range of available data. The current REST interface provides specific data that is used by the MaROS Web application to display and visualize the information; however, the returned information from the REST interface has typically been pre-processed to return only a subset of the entire information within the repository, particularly only the information that is of interest to the GUI (graphical user interface). The new, advanced query and data mining capabilities allow users to retrieve the raw data and/or to perform their own data processing. The query language used to access the repository is a restricted subset of the structured query language (SQL) that can be built safely from the Web user interface, or entered as freeform SQL by a user. The results are returned in a CSV (Comma Separated Values) format for easy exporting to third party tools and applications that can be used for data mining or user-defined visualization and interpretation. This is the first time that a service is capable of providing access to all cross-project relay data from a single Web resource. Because MaROS contains the data for a variety of missions from the Mars network, which span both NASA and ESA, the software also establishes an access control list (ACL) on each data record in the database repository to enforce user access permissions through a multilayered approach.
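Illustrative only (the table and column names are invented, not MaROS's schema): restricting a query to whitelisted fields and exporting the result as CSV can be sketched with sqlite3:

```python
# Hedged sketch of a restricted query + CSV export; schema and data invented.
import csv
import sqlite3

ALLOWED_COLUMNS = {"overflight_id", "orbiter", "lander", "data_volume_mb"}

def build_query(columns):
    cols = [c for c in columns if c in ALLOWED_COLUMNS]   # drop anything not whitelisted
    return f"SELECT {', '.join(cols)} FROM overflights WHERE data_volume_mb >= ?", cols

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE overflights (overflight_id TEXT, orbiter TEXT, "
             "lander TEXT, data_volume_mb REAL)")
conn.execute("INSERT INTO overflights VALUES ('OF-0001', 'MRO', 'MSL', 112.5)")

query, cols = build_query(["overflight_id", "orbiter", "data_volume_mb"])
with open("relay_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(cols)
    writer.writerows(conn.execute(query, (50.0,)))   # values are bound, not interpolated
```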
Visualizing UAS-collected imagery using augmented reality
NASA Astrophysics Data System (ADS)
Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.
2017-05-01
One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.
DOT National Transportation Integrated Search
2012-06-01
The use of visual simulation tools to convey complex concepts has become valuable in education as well as in research. This report describes a project that developed curriculum and visualization tools to train transportation engineering studen...
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications including CAD development, procedure design and simulation of human-system interactions in a desktop-sized work volume.
Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.
2012-04-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize damage from floods. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.
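The abstract mentions a rainfall-runoff forecast model that turns observed rainfall into a flood-risk estimate. The sketch below is not the IFC's model; it is a minimal single-linear-reservoir example, under the assumption that a conceptual storage with outflow proportional to storage is enough to illustrate how a rain time series becomes a streamflow forecast. Parameter values and the storm profile are illustrative.

import numpy as np

def linear_reservoir(rainfall_mm, k_hours=24.0, dt_hours=1.0, storage0=0.0):
    """Single linear reservoir: outflow is proportional to storage (Q = S / k).
    Returns simulated runoff in the same depth-per-time-step units as the rainfall."""
    storage = storage0
    runoff = np.zeros(len(rainfall_mm), dtype=float)
    for t, p in enumerate(rainfall_mm):
        storage += p * dt_hours          # rain adds to storage
        q = storage / k_hours            # outflow proportional to storage
        storage -= q * dt_hours
        runoff[t] = q
    return runoff

# Example: a 12-hour storm followed by dry weather, hourly steps over 5 days.
rain = np.zeros(120)
rain[6:18] = 4.0  # mm per hour
flow = linear_reservoir(rain, k_hours=36.0)
print(f"peak runoff {flow.max():.2f} mm/h at hour {flow.argmax()}")

Operational systems layer calibrated routing, spatially distributed rainfall, and uncertainty estimates on top of this basic idea.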
Communications Effects Server (CES) Model for Systems Engineering Research
2012-01-31
[Diagram and interface labels only: logical interfaces for Visualization, HLA, DIS, and STK tools; execution kernel modules; the Architect GUI component for main network design and visualization; the CES interoperating with STK when running simulations; and third-party visualization, analysis, and text-editor blocks used by HLA tools analysts.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Kate; Field, Patrick; Fierman, Elizabeth
The project team, consisting of the Consensus Building Institute, Inc., Raab Associates, Ltd., and the MIT-Harvard Program on Negotiation, created a model and set of tools for building the capacity of state officials to effectively collaborate with diverse stakeholders in advancing wind development policy formation, wind facility siting, and transmission policy and siting. The model was used to enhance the ability of state officials to advance wind development in their states. Training was delivered in Cambridge, MA, in Spring 2011. The training and associated materials, including a Wind Energy Workbook, website, and simulations, are available for ongoing and widespread dissemination throughout the US.
Allison, J.; Amako, K.; Apostolakis, J.; ...
2016-07-01
Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.
Zaidi, Hasan A.; De Los Reyes, Kenneth; Barkhoudarian, Garni; Litvack, Zachary N.; Bi, Wenya Linda; Rincon-Torroella, Jordina; Mukundan, Srinivasan; Dunn, Ian F.; Laws, Edward R.
2016-01-01
Objective Endoscopic skull base surgery has become increasingly popular among the skull base surgery community, with improved illumination and angled visualization potentially improving tumor resection rates. Intraoperative MRI (iMRI) is used to detect residual disease during the course of the resection. This study is an investigation of the utility of 3-T iMRI in combination with transnasal endoscopy with regard to gross-total resection (GTR) of pituitary macroadenomas. Methods The authors retrospectively reviewed all endoscopic transsphenoidal operations performed in the Advanced Multimodality Image Guided Operating (AMIGO) suite from November 2011 to December 2014. Inclusion criteria were patients harboring presumed pituitary macroadenomas with optic nerve or chiasmal compression and visual loss, operated on by a single surgeon. Results Of the 27 patients who underwent transsphenoidal resection in the AMIGO suite, 20 patients met the inclusion criteria. The endoscope alone, without the use of iMRI, would have correctly predicted 13 (65%) of 20 cases. Gross-total resection was achieved in 12 patients (60%) prior to MRI. Intraoperative MRI helped convert 1 STR and 4 NTRs to GTRs, increasing the number of GTRs from 12 (60%) to 16 (80%). Conclusions Despite advances in visualization provided by the endoscope, the incidence of residual disease can potentially place the patient at risk for additional surgery. The authors found that iMRI can be useful in detecting unexpected residual tumor. The cost-effectiveness of this tool is yet to be determined. PMID:26926058
Cognitive Virtualization: Combining Cognitive Models and Virtual Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; David I. Gertman; Donald D. Dudenhoeffer
2007-08-01
3D manikins are often used in visualizations to model human activity in complex settings. Manikins assist in developing understanding of human actions, movements and routines in a variety of different environments representing new conceptual designs. One such environment is a nuclear power plant control room, where they have the potential to be used to simulate more precise ergonomic assessments of human work stations. Next generation control rooms will pose numerous challenges for system designers. The manikin modeling approach by itself, however, may be insufficient for dealing with the desired technical advancements and challenges of next generation automated systems. Uncertainty regarding effective staffing levels and the potential for negative human performance consequences in the presence of advanced automated systems (e.g., reduced vigilance, poor situation awareness, mistrust or blind faith in automation, higher information load and increased complexity) call for further research. Baseline assessment of novel control room equipment and configurations needs to be conducted. These design uncertainties can be reduced through complementary analysis that merges ergonomic manikin models with models of higher cognitive functions, such as attention, memory, decision-making, and problem-solving. This paper will discuss recent advancements in merging a theory-driven cognitive modeling framework with a 3D visualization modeling tool to evaluate next generation control room human factors and ergonomic assessment. Though this discussion focuses primarily on control room design, the application of such a merger between 3D visualization and cognitive modeling can be extended to various areas of focus such as training and scenario planning.
National Fusion Collaboratory: Grid Computing for Simulations and Experiments
NASA Astrophysics Data System (ADS)
Greenwald, Martin
2004-05-01
The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.
PyPathway: Python Package for Biological Network Analysis and Visualization.
Xu, Yang; Luo, Xiao-Chun
2018-05-01
Life science studies represent one of the biggest generators of large data sets, mainly because of rapid sequencing technological advances. Biological networks, including interactive networks and human-curated pathways, are essential to understand these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Currently, several packages for the Python community have been developed, such as BioPython and Goatools. However, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free and open source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analysis by using an easy web application. For package availability, see the first Reference.
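As a concrete illustration of the overrepresentation analysis such a package implements, the sketch below runs a one-sided hypergeometric test directly with SciPy rather than through PyPathway's own API (which the abstract does not show); the gene symbols, pathway membership, and background size are hypothetical.

from scipy.stats import hypergeom

def overrepresentation_p(study_genes, pathway_genes, population_size):
    """One-sided hypergeometric test: is the pathway overrepresented among the
    study genes, given a background population of genes?"""
    study_genes, pathway_genes = set(study_genes), set(pathway_genes)
    overlap = len(study_genes & pathway_genes)
    # P(X >= overlap), with X ~ Hypergeom(M=population, n=pathway size, N=study size)
    return hypergeom.sf(overlap - 1, population_size, len(pathway_genes), len(study_genes))

# Toy example with hypothetical gene symbols and a 20,000-gene background.
hits = {"TP53", "BRCA1", "ATM", "CHEK2", "MDM2"}
dna_damage_pathway = {"TP53", "ATM", "CHEK2", "BRCA1", "RAD51", "H2AFX"}
print(f"p = {overrepresentation_p(hits, dna_damage_pathway, 20000):.2e}")

Gene set enrichment analysis and network-based enrichment extend the same idea to ranked gene lists and to pathway topology, respectively.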
Nanavaty, Joanne
2018-03-01
This qualitative design study addressed the enhancement of nursing assessment skills through the use of Visual Thinking Strategies and reflection. This study advances understanding of the use of Visual Thinking Strategies and reflection as ways to explore new methods of thinking and observing patient situations relating to health care. Sixty nursing students in a licensed practical nursing program made up the sample of participants, who attended an art gallery as part of a class assignment. Participants responded to a survey of interest in participating in the art gallery visit. Participants reviewed artwork at the gallery and shared observations with the larger group during a post-conference session in a gathering area of the museum at the end of the visit. A reflective exercise elicited further thoughts about the art gallery experience and demonstrated the connections the students made to clinical practice. The findings of this study support the use of Visual Thinking Strategies and reflection as effective teaching and learning tools for enhancing nursing skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
Faiola, Anthony; Srinivas, Preethi; Duke, Jon
2015-01-01
Advances in intensive care unit bedside displays/interfaces and electronic medical record (EMR) technology have not adequately addressed the visual clarity of patient data/information needed to further reduce cognitive load during clinical decision-making. We responded to these challenges with a human-centered approach to designing and testing a decision-support tool: MIVA 2.0 (Medical Information Visualization Assistant, v.2). Envisioned as an EMR visualization dashboard to support rapid analysis of real-time clinical data trends, our primary goal originated from a clinical requirement to reduce cognitive overload. In the study, a convenience sample of 12 participants was recruited, and quantitative and qualitative measures were used to compare MIVA 2.0 with ICU paper medical charts using time-on-task, post-test questionnaires, and interviews. Findings demonstrated a significant difference in speed and accuracy with the use of MIVA 2.0. Qualitative outcomes concurred, with participants acknowledging the potential impact of MIVA 2.0 for reducing cognitive load and enabling quicker and more accurate decision-making.
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
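The parallelization pattern described, partitioning the corpus across workers and merging per-worker results, can be sketched in a few lines. The example below uses Python's multiprocessing module on a single machine purely for illustration; the paper's engine targets cluster architectures, and the chunking scheme and worker count here are arbitrary.

from collections import Counter
from multiprocessing import Pool

def count_terms(docs):
    """Tokenize a chunk of documents and return term frequencies."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def parallel_term_counts(documents, workers=4):
    # Split the corpus into roughly equal chunks, one per worker.
    chunks = [documents[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partial = pool.map(count_terms, chunks)
    # Reduce: merge the per-worker counters into a global term table.
    total = Counter()
    for c in partial:
        total.update(c)
    return total

if __name__ == "__main__":
    corpus = ["visual analytics of text", "parallel text processing engine"] * 1000
    print(parallel_term_counts(corpus).most_common(3))

On a cluster the same map-and-reduce structure would be expressed with message passing or a distributed runtime, but the scaling argument is the same: per-chunk work is independent, and only the reduced term table needs to be communicated.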
Advanced Capabilities for Wind Tunnel Testing in the 21st Century
NASA Technical Reports Server (NTRS)
Kegelman, Jerome T.; Danehy, Paul M.; Schwartz, Richard J.
2010-01-01
Wind tunnel testing methods and test technologies for the 21st century using advanced capabilities are presented. These capabilities are necessary to capture more accurate and high quality test results by reducing the uncertainties in testing, and to facilitate verification of computational tools for design. This paper discusses near-term developments underway in ground testing capabilities, which will enhance the quality of information on both the test article and airstream flow details. Also discussed is a selection of new capability investments that have been made to accommodate such developments. Examples include advanced experimental methods for measuring the test gas itself; using efficient experiment methodologies, including quality assurance strategies within the test; and increasing test result information density by using extensive optical visualization together with computed flow field results. These points apply both to major investments in existing tunnel capabilities and to entirely new capabilities.
EarthCube: A Community-Driven Cyberinfrastructure for the Geosciences
NASA Astrophysics Data System (ADS)
Koskela, Rebecca; Ramamurthy, Mohan; Pearlman, Jay; Lehnert, Kerstin; Ahern, Tim; Fredericks, Janet; Goring, Simon; Peckham, Scott; Powers, Lindsay; Kamalabdi, Farzad; Rubin, Ken; Yarmey, Lynn
2017-04-01
EarthCube is creating a dynamic, System of Systems (SoS) infrastructure and data tools to collect, access, analyze, share, and visualize all forms of geoscience data and resources, using advanced collaboration, technological, and computational capabilities. EarthCube, as a joint effort between the U.S. National Science Foundation Directorate for Geosciences and the Division of Advanced Cyberinfrastructure, is a quickly growing community of scientists across all geoscience domains, as well as geoinformatics researchers and data scientists. EarthCube has attracted an evolving, dynamic virtual community of more than 2,500 contributors, including earth, ocean, polar, planetary, atmospheric, geospace, computer and social scientists, educators, and data and information professionals. During 2017, EarthCube will transition to the implementation phase. The implementation will balance "innovation" and "production" to advance cross-disciplinary science goals as well as the development of future data scientists. This presentation will describe the current architecture design for the EarthCube cyberinfrastructure and implementation plan.
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
Recent advances in quantitative high throughput and high content data analysis.
Moutsatsos, Ioannis K; Parker, Christian N
2016-01-01
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry application. Screening multiple genomic reagents targeting any given gene creates additional challenges and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger, and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use for these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.
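The review highlights quality-control monitoring as one place where simple, transparent statistics still matter as data sets grow. The sketch below computes the widely used Z'-factor for a plate's control wells; the well counts, signal levels, and the conventional 0.5 rule of thumb are illustrative choices, not taken from the article.

import numpy as np

def z_prime(positive_controls, negative_controls):
    """Z'-factor plate-quality statistic: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above roughly 0.5 are usually taken to indicate a robust assay window."""
    pos = np.asarray(positive_controls, dtype=float)
    neg = np.asarray(negative_controls, dtype=float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Simulated control wells from one 384-well plate.
rng = np.random.default_rng(0)
pos_wells = rng.normal(loc=100.0, scale=5.0, size=32)   # e.g. maximum-signal controls
neg_wells = rng.normal(loc=10.0, scale=4.0, size=32)    # e.g. background controls
print(f"Z' = {z_prime(pos_wells, neg_wells):.2f}")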
Volumetric visualization of 3D data
NASA Technical Reports Server (NTRS)
Russell, Gregory; Miles, Richard
1989-01-01
In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.
Image Analysis of DNA Fiber and Nucleus in Plants.
Ohmido, Nobuko; Wako, Toshiyuki; Kato, Seiji; Fukui, Kiichi
2016-01-01
Advances in cytology have led to the application of a wide range of visualization methods in plant genome studies. Image analysis methods are indispensable tools where morphology, density, and color play important roles in biological systems. Visualization and image analysis methods are useful techniques in the analyses of the detailed structure and function of extended DNA fibers (EDFs) and interphase nuclei. The EDF offers the highest spatial resolving power for revealing genome structure, and it can be used for physical mapping, especially for closely located genes and tandemly repeated sequences. On the other hand, analyzing nuclear DNA and proteins can reveal nuclear structure and functions. In this chapter, we describe the image analysis protocol for quantitatively analyzing two types of plant genome material: EDFs and interphase nuclei.
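One routine quantitative step in EDF work is estimating fiber length from a segmented image. The sketch below is not the chapter's protocol; it assumes a binary (already segmented) fiber image and uses scikit-image skeletonization plus the approximate 2.94 kb per micrometer extension of B-form DNA to convert pixel counts to kilobases. The pixel scale and synthetic image are placeholders, and simple pixel counting only roughly approximates the length of curved fibers.

import numpy as np
from skimage.morphology import skeletonize

def fiber_length_kb(binary_fiber_image, microns_per_pixel, kb_per_micron=2.94):
    """Estimate the length of an extended DNA fiber from a binary image:
    skeletonize the fiber to one-pixel width, count skeleton pixels, and
    convert via the pixel scale and the ~2.94 kb/um extension of B-form DNA."""
    skeleton = skeletonize(binary_fiber_image.astype(bool))
    length_um = skeleton.sum() * microns_per_pixel
    return length_um * kb_per_micron

# Synthetic stand-in for a segmented fiber: a 3-pixel-wide horizontal streak.
img = np.zeros((50, 200), dtype=bool)
img[24:27, 10:190] = True
print(f"approx. fiber length: {fiber_length_kb(img, microns_per_pixel=0.2):.0f} kb")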
Metsalu, Tauno; Vilo, Jaak
2015-01-01
The Principal Component Analysis (PCA) is a widely used method of reducing the dimensionality of high-dimensional data, often followed by visualizing two of the components on the scatterplot. Although widely used, the method is lacking an easy-to-use web interface that scientists with little programming skills could use to make plots of their own data. The same applies to creating heatmaps: it is possible to add conditional formatting for Excel cells to show colored heatmaps, but for more advanced features such as clustering and experimental annotations, more sophisticated analysis tools have to be used. We present a web tool called ClustVis that aims to have an intuitive user interface. Users can upload data from a simple delimited text file that can be created in a spreadsheet program. It is possible to modify data processing methods and the final appearance of the PCA and heatmap plots by using drop-down menus, text boxes, sliders etc. Appropriate defaults are given to reduce the time needed by the user to specify input parameters. As an output, users can download PCA plot and heatmap in one of the preferred file formats. This web server is freely available at http://biit.cs.ut.ee/clustvis/. PMID:25969447
Fostering Outreach, Education and Exploration of the Moon Using the Lunar Mapping & Modeling Portal
NASA Astrophysics Data System (ADS)
Dodge, K.; Law, E.; Malhotra, S.; Chang, G.; Kim, R. M.; Bui, B.; Sadaqathullah, S.; Day, B. H.
2014-12-01
The Lunar Mapping and Modeling Portal (LMMP)[1], is a web-based Portal and a suite of interactive visualization and analysis tools for users to access mapped lunar data products (including image mosaics, digital elevation models, etc.) from past and current lunar missions (e.g., Lunar Reconnaissance Orbiter, Apollo, etc.). Originally designed as a mission planning tool for the Constellation Program, LMMP has grown into a generalized suite of tools facilitating a wide range of activities in support of lunar exploration including public outreach, education, lunar mission planning and scientific research. LMMP fosters outreach, education, and exploration of the Moon by educators, students, amateur astronomers, and the general public. These efforts are enhanced by Moon Tours, LMMP's mobile application, which makes LMMP's information accessible to people of all ages, putting opportunities for real lunar exploration in the palms of their hands. Our talk will include an overview of LMMP and a demonstration of its technologies (web portals, mobile apps), to show how it serves NASA data as commodities for use by advanced visualization facilities (e.g., planetariums) and how it contributes to improving teaching and learning, increasing scientific literacy of the general public, and enriching STEM efforts. References:[1] http://www.lmmp.nasa.gov
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyberinfrastructure that help researchers advance science. With its maturity and significant advances, cloud computing has recently emerged as a new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker containers for its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to interactively connect data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
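A small example of the "data-proximate" access pattern this describes is opening a remote dataset served by a THREDDS Data Server over OPeNDAP and pulling only the slice you plot. The sketch below assumes a netCDF4 build with OPeNDAP support; the server URL and the variable name air_temperature are placeholders, not real Unidata endpoints.

import matplotlib.pyplot as plt
from netCDF4 import Dataset

# Placeholder OPeNDAP endpoint -- substitute a real THREDDS Data Server URL.
URL = "https://example.edu/thredds/dodsC/some/gridded/dataset.nc"

ds = Dataset(URL)                       # opens the remote dataset lazily over OPeNDAP
temp = ds.variables["air_temperature"]  # hypothetical variable name
field = temp[0, :, :]                   # only this slice is transferred over the network

plt.imshow(field, origin="lower")
plt.title("first time step of air_temperature")
plt.colorbar()
plt.show()

Catalog-aware clients can discover such datasets programmatically instead of hard-coding the URL.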
Pathway Tools version 19.0 update: software for pathway/genome informatics and systems biology.
Karp, Peter D; Latendresse, Mario; Paley, Suzanne M; Krummenacker, Markus; Ong, Quang D; Billington, Richard; Kothari, Anamika; Weaver, Daniel; Lee, Thomas; Subhraveti, Pallavi; Spaulding, Aaron; Fulcher, Carol; Keseler, Ingrid M; Caspi, Ron
2016-09-01
Pathway Tools is a bioinformatics software environment with a broad set of capabilities. The software provides genome-informatics tools such as a genome browser, sequence alignments, a genome-variant analyzer and comparative-genomics operations. It offers metabolic-informatics tools, such as metabolic reconstruction, quantitative metabolic modeling, prediction of reaction atom mappings and metabolic route search. Pathway Tools also provides regulatory-informatics tools, such as the ability to represent and visualize a wide range of regulatory interactions. This article outlines the advances in Pathway Tools in the past 5 years. Major additions include components for metabolic modeling, metabolic route search, computation of atom mappings and estimation of compound Gibbs free energies of formation; addition of editors for signaling pathways, for genome sequences and for cellular architecture; storage of gene essentiality data and phenotype data; display of multiple alignments, and of signaling and electron-transport pathways; and development of Python and web-services application programming interfaces. Scientists around the world have created more than 9800 Pathway/Genome Databases by using Pathway Tools, many of which are curated databases for important model organisms. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Visualization Tools for Teaching Computer Security
ERIC Educational Resources Information Center
Yuan, Xiaohong; Vega, Percy; Qadah, Yaseen; Archer, Ricky; Yu, Huiming; Xu, Jinsheng
2010-01-01
Using animated visualization tools has been an important teaching approach in computer science education. We have developed three visualization and animation tools that demonstrate various information security concepts and actively engage learners. The information security concepts illustrated include: packet sniffer and related computer network…
2010-01-01
Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitionally operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. Conclusions RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field. PMID:20459631
Popenda, Mariusz; Szachniuk, Marta; Blazewicz, Marek; Wasik, Szymon; Burke, Edmund K; Blazewicz, Jacek; Adamiak, Ryszard W
2010-05-06
Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitionally operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field.
Visualization and Analytics Tools for Infectious Disease Epidemiology: A Systematic Review
Carroll, Lauren N.; Au, Alan P.; Detwiler, Landon Todd; Fu, Tsung-chieh; Painter, Ian S.; Abernethy, Neil F.
2014-01-01
Background A myriad of new tools and algorithms have been developed to help public health professionals analyze and visualize the complex data used in infectious disease control. To better understand approaches to meet these users' information needs, we conducted a systematic literature review focused on the landscape of infectious disease visualization tools for public health professionals, with a special emphasis on geographic information systems (GIS), molecular epidemiology, and social network analysis. The objectives of this review are to: (1) Identify public health user needs and preferences for infectious disease information visualization tools; (2) Identify existing infectious disease information visualization tools and characterize their architecture and features; (3) Identify commonalities among approaches applied to different data types; and (4) Describe tool usability evaluation efforts and barriers to the adoption of such tools. Methods We identified articles published in English from January 1, 1980 to June 30, 2013 from five bibliographic databases. Articles with a primary focus on infectious disease visualization tools, needs of public health users, or usability of information visualizations were included in the review. Results A total of 88 articles met our inclusion criteria. Users were found to have diverse needs, preferences and uses for infectious disease visualization tools, and the existing tools are correspondingly diverse. The architecture of the tools was inconsistently described, and few tools in the review discussed the incorporation of usability studies or plans for dissemination. Many studies identified concerns regarding data sharing, confidentiality and quality. Existing tools offer a range of features and functions that allow users to explore, analyze, and visualize their data, but the tools are often for siloed applications. Commonly cited barriers to widespread adoption included lack of organizational support, access issues, and misconceptions about tool use. Discussion and Conclusion As the volume and complexity of infectious disease data increases, public health professionals must synthesize highly disparate data to facilitate communication with the public and inform decisions regarding measures to protect the public's health. Our review identified several themes: consideration of users' needs, preferences, and computer literacy; integration of tools into routine workflow; complications associated with understanding and use of visualizations; and the role of user trust and organizational support in the adoption of these tools. Interoperability also emerged as a prominent theme, highlighting challenges associated with the increasingly collaborative and interdisciplinary nature of infectious disease control and prevention. Future work should address methods for representing uncertainty and missing data to avoid misleading users as well as strategies to minimize cognitive overload. PMID:24747356
Visualization and analytics tools for infectious disease epidemiology: a systematic review.
Carroll, Lauren N; Au, Alan P; Detwiler, Landon Todd; Fu, Tsung-Chieh; Painter, Ian S; Abernethy, Neil F
2014-10-01
A myriad of new tools and algorithms have been developed to help public health professionals analyze and visualize the complex data used in infectious disease control. To better understand approaches to meet these users' information needs, we conducted a systematic literature review focused on the landscape of infectious disease visualization tools for public health professionals, with a special emphasis on geographic information systems (GIS), molecular epidemiology, and social network analysis. The objectives of this review are to: (1) identify public health user needs and preferences for infectious disease information visualization tools; (2) identify existing infectious disease information visualization tools and characterize their architecture and features; (3) identify commonalities among approaches applied to different data types; and (4) describe tool usability evaluation efforts and barriers to the adoption of such tools. We identified articles published in English from January 1, 1980 to June 30, 2013 from five bibliographic databases. Articles with a primary focus on infectious disease visualization tools, needs of public health users, or usability of information visualizations were included in the review. A total of 88 articles met our inclusion criteria. Users were found to have diverse needs, preferences and uses for infectious disease visualization tools, and the existing tools are correspondingly diverse. The architecture of the tools was inconsistently described, and few tools in the review discussed the incorporation of usability studies or plans for dissemination. Many studies identified concerns regarding data sharing, confidentiality and quality. Existing tools offer a range of features and functions that allow users to explore, analyze, and visualize their data, but the tools are often for siloed applications. Commonly cited barriers to widespread adoption included lack of organizational support, access issues, and misconceptions about tool use. As the volume and complexity of infectious disease data increases, public health professionals must synthesize highly disparate data to facilitate communication with the public and inform decisions regarding measures to protect the public's health. Our review identified several themes: consideration of users' needs, preferences, and computer literacy; integration of tools into routine workflow; complications associated with understanding and use of visualizations; and the role of user trust and organizational support in the adoption of these tools. Interoperability also emerged as a prominent theme, highlighting challenges associated with the increasingly collaborative and interdisciplinary nature of infectious disease control and prevention. Future work should address methods for representing uncertainty and missing data to avoid misleading users as well as strategies to minimize cognitive overload. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.
Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M
2017-01-01
To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.
Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma
Black, Alex A.
2017-01-01
To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433
Visual illusion of tool use recalibrates tactile perception
Miller, Luke E.; Longo, Matthew R.; Saygin, Ayse P.
2018-01-01
Brief use of a tool recalibrates multisensory representations of the user’s body, a phenomenon called tool embodiment. Despite two decades of research, little is known about its boundary conditions. It has been widely argued that embodiment requires active tool use, suggesting a critical role for somatosensory and motor feedback. The present study used a visual illusion to cast doubt on this view. We used a mirror-based setup to induce a visual experience of tool use with an arm that was in fact stationary. Following illusory tool use, tactile perception was recalibrated on this stationary arm, and with equal magnitude as physical use. Recalibration was not found following illusory passive tool holding, and could not be accounted for by sensory conflict or general interhemispheric plasticity. These results suggest visual tool-use signals play a critical role in driving tool embodiment. PMID:28196765
Srivastava, Amit K.; Kadayakkara, Deepak K.; Bar-Shir, Amnon; Gilad, Assaf A.; McMahon, Michael T.; Bulte, Jeff W. M.
2015-01-01
The field of molecular and cellular imaging allows molecules and cells to be visualized in vivo non-invasively. It has uses not only as a research tool but in clinical settings as well, for example in monitoring cell-based regenerative therapies, in which cells are transplanted to replace degenerating or damaged tissues, or to restore a physiological function. The success of such cell-based therapies depends on several critical issues, including the route and accuracy of cell transplantation, the fate of cells after transplantation, and the interaction of engrafted cells with the host microenvironment. To assess these issues, it is necessary to monitor transplanted cells non-invasively in real-time. Magnetic resonance imaging (MRI) is a tool uniquely suited to this task, given its ability to image deep inside tissue with high temporal resolution and sensitivity. Extraordinary efforts have recently been made to improve cellular MRI as applied to regenerative medicine, by developing more advanced contrast agents for use as probes and sensors. These advances enable the non-invasive monitoring of cell fate and, more recently, that of the different cellular functions of living cells, such as their enzymatic activity and gene expression, as well as their time point of cell death. We present here a review of recent advancements in the development of these probes and sensors, and of their functioning, applications and limitations. PMID:26035841
Visual programming for next-generation sequencing data analytics.
Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia
2016-01-01
High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
An overview of suite for automated global electronic biosurveillance (SAGES)
NASA Astrophysics Data System (ADS)
Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.
2012-06-01
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.
Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.
2011-01-01
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations. PMID:21572957
Screening methods for post-stroke visual impairment: a systematic review.
Hanna, Kerry Louise; Hepworth, Lauren Rachel; Rowe, Fiona
2017-12-01
To provide a systematic overview of the various tools available to screen for post-stroke visual impairment. A review of the literature was conducted, including randomised controlled trials, controlled trials, cohort studies, observational studies, systematic reviews and retrospective medical note reviews. All languages were included and translation was obtained. Participants included adults ≥18 years old diagnosed with a visual impairment as a direct result of a stroke. We searched a broad range of scholarly online resources and hand-searched article registers of published, unpublished and on-going trials. Search terms included a variety of MeSH terms and alternatives in relation to stroke and visual conditions. Study selection was performed by two authors independently. The quality of the evidence and risk of bias were assessed using the STROBE, GRACE and PRISMA statements. A total of 25 articles (n = 2924) were included in this review. The articles appraised reported on tools screening solely for visual impairments or for general post-stroke disabilities inclusive of vision. The majority of identified tools screen for visual perception, including visual neglect (VN), with few screening for visual acuity (VA), visual field (VF) loss or ocular motility (OM) defects. Six articles reported on nine screening tools which combined visual screening assessment with screening for general stroke disabilities. Of these, three included screening for VA; three screened for VF loss; three screened for OM defects; and all screened for VN. Two tools screened for all visual impairments. A further 19 articles reported on individual vision screening tests in stroke populations: two for VF loss, 11 for VN and six for other visual perceptual defects. Most tools cannot accurately account for those with aphasia or communicative deficits, which are common problems following a stroke. There is currently no standardised visual screening tool which can accurately assess all potential post-stroke visual impairments. The current tools screen for only a subset of potential stroke-related impairments, which means many visual defects may be missed. The sensitivity of those which screen for all impairments is significantly lowered when patients are unable to report their visual symptoms. Future research is required to develop a screening tool that encompasses all potential visual deficits, is easy for patients to perform, and can be readily administered by health care professionals, in order to ensure all stroke survivors with visual impairment are accurately identified and managed. Implications for Rehabilitation: Over 65% of stroke survivors will suffer from a visual impairment, yet 45% of stroke units do not assess vision. Visual impairment significantly reduces quality of life, with consequences such as inability to return to work or drive, and depression. This review outlines the available screening methods to accurately identify stroke survivors with visual impairments. Identifying visual impairment after stroke can aid general rehabilitation and thus improve the quality of life of these patients.
Enabling scientific workflows in virtual reality
Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.
2006-01-01
To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data range. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Application was to combine the forces of six leading edge laboratories working in the areas of visualization and distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world class expertise in parallel rendering, deep image based rendering, immersive environment technology, large-format multi-projector wall based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids" and CAVE technology and to add these to the teams that have developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
Ocean waves are the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and therefore can pose difficulties for some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium to present geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB® software programs have been written to visualize and animate development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics in Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
Homman-Ludiye, Jihane; Bourne, James A.
2014-01-01
The integration of the visual stimulus takes place at the level of the neocortex, organized in anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision, with approximately 50% of their neocortical surface dedicated to visual processing, and possess many more visual areas than any other mammal, making them the model of choice for studying visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex and specifying area identity, as well as to elucidate the events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, species including the mouse have driven the identification of cellular markers with area-specific expression profiles, the development of new tools to label connections, and technological advances in imaging techniques that enable monitoring of cortical activity in behaving animals. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, gained mainly in the mouse thanks to the availability of genetically modified lines, as well as the limitations of the mouse model and the need for alternative species. PMID:25071460
NASA Astrophysics Data System (ADS)
Taber, J.; Bahavar, M.; Bravo, T. K.; Butler, R. F.; Kilb, D. L.; Trabant, C.; Woodward, R.; Ammon, C. J.
2011-12-01
Data from dense seismic arrays can be used to visualize the propagation of seismic waves, resulting in animations effective for teaching both general and advanced audiences. One of the first visualizations of this type was developed using Objective C code and EarthScope/USArray data, which was then modified and ported to the Matlab platform and has now been standardized and automated as an IRIS Data Management System (IRIS-DMS) data product. These iterative code developments and improvements were completed by C. Ammon, R. Woodward and M. Bahavar, respectively. Currently, an automated script creates Ground Motion Visualizations (GMVs) for all global earthquakes over magnitude 6 recorded by EarthScope's USArray Transportable Array (USArray TA) network. The USArray TA network is a rolling array of 400 broadband stations deployed on a uniform 70-km grid. These near real-time GMV visualizations are typically available for download within 4 hours or less of their occurrence (see: www.iris.edu/dms/products/usarraygmv/). The IRIS-DMS group has recently added a feature that allows users to highlight key elements within the GMVs, by providing an online tool for creating customized GMVs. This new interface allows users to select the stations, channels, and time window of interest, adjust the mapped areal extent of the view, and specify high and low pass filters. An online tutorial available from the IRIS Education and Public Outreach (IRIS-EPO) website, listed below, steps through a teaching sequence that can be used to explain the basic features of the GMVs. For example, they can be used to demonstrate simple concepts such as relative P, S and surface wave velocities and corresponding wavelengths for middle-school students, or more advanced concepts such as the influence of focal mechanism on waveforms, or how seismic waves converge at an earthquake's antipode. For those who desire a greater level of customization, including the ability to use the GMV framework with data sets not stored within the IRIS-DMS, the Matlab GMV code is now also available from the IRIS-DMS website. These GMV codes have been applied to sac-formatted data from the Quake Catcher Network (QCN). Through a collaboration between NSF-funded programs and projects (e.g., IRIS and QCN) we are striving to make these codes user friendly enough to be routinely incorporated in undergraduate and graduate seismology classes. In this way, we will help provide a research tool for students to explore never-looked-at-before data, similar to actual seismology research. As technology is advancing quickly, we now have more data than seismologists can easily examine. Given this, we anticipate students using our codes can perform a 'citizen scientist' role in that they can help us identify key signals within the unexamined vast data streams we are acquiring.
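The basic recipe behind these visualizations can be sketched in a few lines: at each time step, plot every station at its geographic location and color it by its instantaneous (filtered and normalized) vertical ground velocity. The example below uses matplotlib with synthetic station coordinates and a synthetic traveling wave; it is not the IRIS GMV Matlab code, and the array layout, wave speed, and normalization are assumptions for the demonstration.

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Synthetic "array": random station coordinates and a wave sweeping across them
rng = np.random.default_rng(0)
lon, lat = rng.uniform(-125, -95, 400), rng.uniform(30, 50, 400)
times = np.arange(0.0, 600.0, 5.0)                        # seconds after the event
dist_km = 111.0 * np.hypot(lon + 125.0, lat - 40.0)       # distance from a notional source
vel = np.array([np.cos((dist_km - 3.5 * t) / 30.0)
                * np.exp(-0.5 * ((dist_km - 3.5 * t) / 150.0) ** 2)
                for t in times])                          # normalized motion per station per frame

fig, ax = plt.subplots()
sc = ax.scatter(lon, lat, c=vel[0], cmap="seismic", vmin=-1, vmax=1, s=15)
ax.set_xlabel("Longitude"); ax.set_ylabel("Latitude")

def update(i):
    sc.set_array(vel[i])                                  # recolor stations for frame i
    ax.set_title(f"t = {times[i]:.0f} s")
    return sc,

anim = FuncAnimation(fig, update, frames=len(times), interval=50)
anim.save("gmv_demo.mp4", fps=20)                         # requires ffmpeg on the system
```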
Learn to Teach Chemistry Using Visual Media Tools
ERIC Educational Resources Information Center
Turkoguz, Suat
2012-01-01
The aim of this study was to investigate undergraduate students' attitudes to using visual media tools in the chemistry laboratory. One hundred and fifteen undergraduates studying science education at Dokuz Eylul University, Turkey participated in the study. They video-recorded chemistry experiments with visual media tools and assessed them on a…
Use of digital photography for power plant retrofits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamba, J.J.
1995-09-01
One of the latest advancements in electronic tools for reducing engineering and drafting effort is the use of digital photography (DP) for retrofit and betterment projects at fossil and nuclear power plants. Sargent and Lundy (S and L) has effectively used digital photography for condition assessments, minor backfit repairs, thermo-lag fire wrap assessments and repairs, and other applications. Digital photography offers several benefits on these types of projects including eliminating the need for official repair drawings and providing station maintenance with a true 3-D visualization of the repair.
Through the High-Tech Looking Glass | Center for Cancer Research
Science begins with observation; scientists have made telescopes to examine things farther away than the eye can see and microscopes to examine things invisible to human vision. Since Robert Hooke in the 17th century used the first microscope to document the existence of living cells, advances in cell biology have been tied to ever more innovative tools for visualizing and analyzing the microscopic world. CCR scientists continue to creatively expand the boundaries of observation to answer longstanding and diverse questions about the inner workings of cells.
Study on generation and sharing of on-demand global seamless data—Taking MODIS NDVI as an example
NASA Astrophysics Data System (ADS)
Shen, Dayong; Deng, Meixia; Di, Liping; Han, Weiguo; Peng, Chunming; Yagci, Ali Levent; Yu, Genong; Chen, Zeqiang
2013-04-01
By applying the advanced Geospatial Data Abstraction Library (GDAL) and BigTIFF technology in a Geographical Information System (GIS) with Service Oriented Architecture (SOA), this study has derived global datasets using tile-based input data and implemented Virtual Web Map Service (VWMS) and Virtual Web Coverage Service (VWCS) to provide software tools for visualization and acquisition of global data. Taking MODIS Normalized Difference Vegetation Index (NDVI) as an example, this study demonstrates the feasibility, efficiency and key features of the proposed approach.
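A minimal sketch of the tile-to-global-mosaic step using the GDAL Python bindings is shown below; the tile file names and creation options are assumptions for the example, and the study's VWMS/VWCS services would sit on top of output of this kind.

```
from osgeo import gdal
import glob

# Build a virtual mosaic over per-tile GeoTIFFs, then materialize it as a
# single BigTIFF so that outputs larger than 4 GB can be written.
tiles = sorted(glob.glob("ndvi_tiles/MOD13*.tif"))   # hypothetical tile names
vrt = gdal.BuildVRT("ndvi_global.vrt", tiles)

gdal.Translate(
    "ndvi_global_bigtiff.tif",
    vrt,
    creationOptions=["BIGTIFF=YES", "COMPRESS=DEFLATE", "TILED=YES"],
)
vrt = None   # flush and close the VRT dataset
```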
Contributing Data to the Fleet DNA Project (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-09-01
The Fleet DNA clearinghouse of commercial fleet transportation data helps vehicle manufacturers and developers optimize vehicle designs and helps fleet managers choose advanced technologies for their fleets. This online tool - available at www.nrel.gov/fleetdna - provides data summaries and visualizations similar to real-world 'genetics' for medium- and heavy-duty commercial fleet vehicles operating within a variety of vocations. To contribute your fleet data, please contact Adam Duran of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) at adam.duran@nrel.gov or 303-275-4586.
The evolving trend in spacecraft health analysis
NASA Technical Reports Server (NTRS)
Kirkpatrick, Russell L.
1993-01-01
The Space Flight Operations Center inaugurated the concept of a central data repository for spacecraft data and the distribution of computing power to the end users for that data's analysis at the Jet Propulsion Laboratory. The Advanced Multimission Operations System is continuing the evolution of this concept as new technologies emerge. Constant improvements in data management tools, data visualization, and hardware lead to ever expanding ideas for improving the analysis of spacecraft health in an era of budget constrained mission operations systems. The foundation of this evolution, its history, and its current plans will be discussed.
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
SpecViz: Interactive Spectral Data Analysis
NASA Astrophysics Data System (ADS)
Earl, Nicholas Michael; STScI
2016-06-01
The astronomical community is about to enter a new generation of scientific enterprise. With next-generation instrumentation and advanced capabilities, the need has arisen to equip astronomers with the necessary tools to deal with large, multi-faceted data. The Space Telescope Science Institute has initiated a data analysis forum for the creation, development, and maintenance of software tools for the interpretation of these new data sets. SpecViz is an interactive 1-D spectral visualization and analysis application built with Python in an open source development environment. A user-friendly GUI allows for a fast, interactive approach to spectral analysis. SpecViz supports handling of unique and instrument-specific data and incorporates advanced spectral unit handling and conversions in a flexible, high-performance interactive plotting environment. Active spectral feature analysis is possible through interactive measurement and statistical tools. It can be used to build wide-band SEDs, with the capability of combining or overplotting data products from various instruments. SpecViz sports advanced toolsets for filtering and detrending spectral lines; identifying, isolating, and manipulating spectral features; as well as utilizing spectral templates for renormalizing data in an interactive way. SpecViz also includes a flexible model fitting toolset that allows for multi-component models, as well as custom models, to be used with various fitting and decomposition routines. SpecViz also features robust extension via custom data loaders and connection to the central communication system underneath the interface for more advanced control. Integration with Jupyter notebooks via the active IPython kernel allows SpecViz to be used in addition to a user's normal workflow without demanding that the user drastically alter their method of data analysis. In addition, SpecViz allows the interactive analysis of multi-object spectroscopy in the same straightforward, consistent way. Through the development of such tools, STScI hopes to unify astronomical data analysis software for JWST and other instruments, allowing for efficient, reliable, and consistent scientific results.
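The non-interactive core of such a spectral-fitting workflow can be scripted with the same Astropy machinery. The sketch below is illustrative only: it uses astropy.modeling directly rather than the SpecViz API, and the synthetic spectrum and line parameters are invented.

```
import numpy as np
from astropy.modeling import models, fitting

# Synthetic 1-D spectrum: linear continuum plus a Gaussian emission line
wave = np.linspace(6500.0, 6650.0, 300)                  # Angstrom
flux = (1.0 + 2e-4 * (wave - 6500.0)
        + 5.0 * np.exp(-0.5 * ((wave - 6563.0) / 3.0) ** 2)
        + np.random.normal(0.0, 0.05, wave.size))

# Compound model: Gaussian line + first-order polynomial continuum
init = models.Gaussian1D(amplitude=3.0, mean=6560.0, stddev=5.0) \
       + models.Polynomial1D(degree=1)
fitter = fitting.LevMarLSQFitter()
fit = fitter(init, wave, flux)

print(fit)   # fitted amplitude, centroid, width, and continuum coefficients
```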
Updates in metabolomics tools and resources: 2014-2015.
Misra, Biswapriya B; van der Hooft, Justin J J
2016-01-01
Data processing and interpretation represent the most challenging and time-consuming steps in high-throughput metabolomic experiments, regardless of the analytical platforms (MS or NMR spectroscopy based) used for data acquisition. Improved machinery in metabolomics generates increasingly complex datasets that create the need for more and better processing and analysis software and in silico approaches to understand the resulting data. However, a comprehensive source of information describing the utility of the most recently developed and released metabolomics resources--in the form of tools, software, and databases--is currently lacking. Thus, here we provide an overview of freely-available, and open-source, tools, algorithms, and frameworks to make both upcoming and established metabolomics researchers aware of the recent developments in an attempt to advance and facilitate data processing workflows in their metabolomics research. The major topics include tools and resources for data processing, data annotation, and data visualization in MS and NMR-based metabolomics. Most of the tools described in this review are dedicated to untargeted metabolomics workflows; however, some more specialized tools are described as well. All tools and resources described, including their analytical and computational platform dependencies, are summarized in an overview table. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cloud-Based Tools to Support High-Resolution Modeling (Invited)
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Swain, N.; Christensen, S.
2013-12-01
The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archiving, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482
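As an illustration of the scripting layer such a stack enables, the sketch below generates one HTCondor job per model scenario and hands the batch to condor_submit. It is a sketch only, not CI-WATER code; the scenario names, wrapper executable, and submit-file fields are assumptions.

```
import subprocess
from pathlib import Path

scenarios = ["landuse_2030", "snowmelt_wet", "burned_area_2012"]   # hypothetical scenario ids
submit = ["executable = run_model.sh",        # hypothetical wrapper that launches the model
          "request_cpus = 8",
          "log = condor.log",
          "error = err.$(Process)",
          "output = out.$(Process)"]
for name in scenarios:
    submit += [f"arguments = --scenario {name}", "queue 1"]   # one job per scenario

Path("scenarios.sub").write_text("\n".join(submit) + "\n")
subprocess.run(["condor_submit", "scenarios.sub"], check=True)  # requires an HTCondor pool
```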
Conceptual design study for an advanced cab and visual system, volume 2
NASA Technical Reports Server (NTRS)
Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.
1980-01-01
The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User's applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.
Tools for Coordinated Planning Between Observatories
NASA Technical Reports Server (NTRS)
Jones, Jeremy; Fishman, Mark; Grella, Vince; Kerbel, Uri; Maks, Lori; Misra, Dharitri; Pell, Vince; Powers, Edward I. (Technical Monitor)
2001-01-01
With the realization of NASA's era of great observatories, there are now more than three space-based telescopes operating in different wavebands. This situation provides astronomers with a unique opportunity to simultaneously observe with multiple observatories. Yet scheduling multiple observatories simultaneously is highly inefficient when compared to observations using only one single observatory. Thus, programs using multiple observatories are limited not due to scientific restrictions, but due to operational inefficiencies. At present, multi-observatory programs are conducted by submitting observing proposals separately to each concerned observatory. To assure that the proposed observations can be scheduled, each observatory's staff has to check that the observations are valid and meet all the constraints for their own observatory; in addition, they have to verify that the observations satisfy the constraints of the other observatories. Thus, coordinated observations require painstaking manual collaboration among the observatory staff at each observatory. Due to the lack of automated tools for coordinated observations, this process is time consuming, error-prone, and the outcome of the requests is not certain until the very end. To increase observatory operations efficiency, such manpower intensive processes need to undergo re-engineering. To overcome this critical deficiency, Goddard Space Flight Center's Advanced Architectures and Automation Branch is developing a prototype effort called the Visual Observation Layout Tool (VOLT). The main objective of the VOLT project is to provide visual tools to help automate the planning of coordinated observations by multiple astronomical observatories, as well as to increase the scheduling probability of all observations.
Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E.; Ellisman, Mark; Grethe, Jeffrey; Wooley, John
2011-01-01
The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data. PMID:21045053
Grandl, Susanne; Sztrókay-Gaul, Anikó; Mittone, Alberto; Gasilov, Sergey; Brun, Emmanuel; Bravin, Alberto; Mayr, Doris; Auweter, Sigrid D; Hellerhoff, Karin; Reiser, Maximilian; Coan, Paola
2016-01-01
Neoadjuvant chemotherapy is the state-of-the-art treatment in advanced breast cancer. A correct visualization of the post-therapeutic tumor size is of high prognostic relevance. X-ray phase-contrast computed tomography (PC-CT) has been shown to provide improved soft-tissue contrast at a resolution formerly restricted to histopathology, at low doses. This study aimed at assessing ex-vivo the potential use of PC-CT for visualizing the effects of neoadjuvant chemotherapy on breast carcinoma. The analysis was performed on two ex-vivo formalin-fixed mastectomy samples containing an invasive carcinoma removed from two patients treated with neoadjuvant chemotherapy. Images were matched with corresponding histological slices. The visibility of typical post-therapeutic tissue changes was assessed and compared to results obtained with conventional clinical imaging modalities. PC-CT depicted the different tissue types with an excellent correlation to histopathology. Post-therapeutic tissue changes were correctly visualized and the residual tumor mass could be detected. PC-CT outperformed clinical imaging modalities in the detection of chemotherapy-induced tissue alterations including post-therapeutic tumor size. PC-CT might become a unique diagnostic tool in the prediction of tumor response to neoadjuvant chemotherapy. PC-CT might be used to assist during histopathological diagnosis, offering a high-resolution and high-contrast virtual histological tool for the accurate delineation of tumor boundaries.
Gori, Simone; Molteni, Massimo; Facoetti, Andrea
2016-01-01
A visual illusion refers to a percept that is different in some aspect from the physical stimulus. Illusions are a powerful non-invasive tool for understanding the neurobiology of vision, telling us, indirectly, how the brain processes visual stimuli. There are some neurodevelopmental disorders characterized by visual deficits. Surprisingly, just a few studies investigated illusory perception in clinical populations. Our aim is to review the literature supporting a possible role for visual illusions in helping us understand the visual deficits in developmental dyslexia and autism spectrum disorder. Future studies could develop new tools – based on visual illusions – to identify an early risk for neurodevelopmental disorders. PMID:27199702
Idiopathic Intracranial Hypertension in Children and Adolescents: An Update.
Cleves-Bayon, Catalina
2018-03-01
Idiopathic intracranial hypertension (IIH), previously known as pseudotumor cerebri syndrome (PTC), is a serious neurological disorder that can lead to irreversible visual loss. Although it predominantly affects women of reproductive age, the pediatric population is not spared. In the past few years, the condition has been redefined, due to new accepted values for opening pressure in children and advances in neuroimaging. Emerging techniques in ophthalmology are being increasingly used to monitor disease in these patients. And, although the treatment tools have not changed in several years, important evidence of efficacy for acetazolamide finally came to light in recent years in the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT). This review article provides an overview of recent advances in the diagnosis, evaluation and treatment of IIH. © 2017 American Headache Society.
England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter
2016-01-01
Objective: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiography. Methods: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. Results: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). Conclusion: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. Advances in knowledge: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality. PMID:26943836
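For reference, Cronbach's alpha for a k-item scale is alpha = k / (k - 1) * (1 - (sum of the item variances) / (variance of the total score)). The sketch below computes it for a small synthetic rating matrix; the data are invented and this is not the authors' analysis code.

```
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_raters, n_items) array of item ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of each rater's total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic example: 6 observers rating 4 image-quality items on a 1-5 scale
ratings = np.array([[4, 4, 5, 4],
                    [3, 3, 4, 3],
                    [5, 4, 5, 5],
                    [2, 3, 2, 2],
                    [4, 5, 4, 4],
                    [3, 2, 3, 3]])
print(round(cronbach_alpha(ratings), 2))
```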
Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence
2007-11-01
The Cancer Bioinformatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information and the associated data needed to allow a gold standard to be defined for the given database, against which change analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.
ERIC Educational Resources Information Center
Ecoma, Victor
2016-01-01
The paper reflects upon the tools, approaches and applications of visual literacy in the Visual Arts Department of Cross River University of Technology, Calabar, Nigeria. The objective of the discourse is to examine how the visual arts training and practice equip students with skills in visual literacy through methods of production, materials and…
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
Are Visual Informatics Actually Useful in Practice: A Study in a Film Studies Context
NASA Astrophysics Data System (ADS)
Mohamad Ali, Nazlena; Smeaton, Alan F.
This paper describes our work in examining the question of whether providing a visual informatics application in an educational scenario, in particular video content analysis, actually yields real benefit in practice. We provide a new software tool, in the domain of movie content analysis technologies, for use by film studies students at Dublin City University, and we try to address the research question of measuring the 'benefit' to students from the use of these technologies. We examine their real practices in studying for the module using our advanced application as compared to using conventional DVD browsing of movie content. In carrying out this experiment, we found that students have better essay outcomes and higher satisfaction levels, and the mean time spent analyzing movies is longer with the new technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naqvi, S
2014-06-15
Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and long-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
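The core calculation of the second tool can be sketched directly: advance the relativistic momentum with the Lorentz force and recover the velocity through the Lorentz factor. The example below is illustrative only (not the author's code); the fields, particle energy, and step size are assumptions, and a simple explicit step is used rather than a production integrator.

```
import numpy as np

q, m, c = -1.602e-19, 9.109e-31, 2.998e8          # electron charge (C), mass (kg), speed of light (m/s)
E = np.array([0.0, 0.0, 1.0e5])                   # V/m, uniform field assumed for the demo
B = np.array([0.0, 0.0, 1.0e-3])                  # T, uniform field assumed for the demo

def push(p, r, dt):
    """One explicit step of dp/dt = q(E + v x B), with v = p / (gamma m)."""
    gamma = np.sqrt(1.0 + np.dot(p, p) / (m * c) ** 2)
    v = p / (gamma * m)
    return p + q * (E + np.cross(v, B)) * dt, r + v * dt

# Launch a 100 keV electron across the field and record its trajectory
gamma0 = 1.0 + 1.0e5 * 1.602e-19 / (m * c ** 2)
p = np.array([m * c * np.sqrt(gamma0 ** 2 - 1.0), 0.0, 0.0])
r = np.zeros(3)
track = [r.copy()]
for _ in range(20000):
    p, r = push(p, r, 1e-12)
    track.append(r.copy())
track = np.array(track)                           # (N, 3) positions, ready for plotting
```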
New endoscopic and cytologic tools for cancer surveillance in the digestive tract
Brentnall, Teresa A.; Dominitz, Jason A.
2009-01-01
Synopsis Cancer surveillance is an increasing part of everyday practice in gastrointestinal endoscopy due to the identification of high risk groups from genetic and biomarker testing, genealogic and epidemiologic studies, and the increasing number of cancer survivors. An efficient surveillance program requires a cost-effective means for image-guided cancer detection and biopsy. A laser-based tethered-capsule endoscope with enhanced spectral imaging is introduced for unsedated surveillance of the lower esophagus. An ultrathin version of this same endoscope technology provides a 1.2-mm guidewire with imaging capability and cannula-style tools are proposed for image-guided biopsy. Advanced 3D cell visualization techniques are described for increasing the sensitivity of early cancer diagnosis from hematoxylin-stained cells sampled from the pancreatic and biliary ducts. PMID:19423026
Some Observations on the Current Status of Performing Finite Element Analyses
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.
2015-01-01
Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of the early career engineers of today are very proficient in the usage of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of common encounters are presented. To overcome the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.
Single Cell Gene Expression Profiling of Skeletal Muscle-Derived Cells.
Gatto, Sole; Puri, Pier Lorenzo; Malecova, Barbora
2017-01-01
Single cell gene expression profiling is a fundamental tool for studying the heterogeneity of a cell population by addressing the phenotypic and functional characteristics of each cell. Technological advances that have coupled microfluidic technologies with high-throughput quantitative RT-PCR analyses have enabled detailed analyses of single cells in various biological contexts. In this chapter, we describe the procedure for isolating the skeletal muscle interstitial cells termed Fibro-Adipogenic Progenitors (FAPs) and their gene expression profiling at the single cell level. Moreover, we accompany our bench protocol with a bioinformatics analysis designed to process raw data as well as to visualize single cell gene expression data. Single cell gene expression profiling is therefore a useful tool in the investigation of FAPs heterogeneity and their contribution to muscle homeostasis.
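A minimal sketch of the kind of downstream visualization step described above is shown below; the Ct-to-expression transform, gene panel, and clustering choices are assumptions for illustration, not the chapter's pipeline.

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, leaves_list

# Synthetic single-cell qPCR matrix: rows = cells, columns = genes (Ct values)
rng = np.random.default_rng(1)
genes = ["Pdgfra", "Sca1", "Pparg", "Col1a1", "Myod1"]          # hypothetical panel
ct = pd.DataFrame(rng.uniform(18, 35, size=(60, len(genes))), columns=genes)

expr = 35.0 - ct.clip(upper=35.0)            # simple "expression = LOD - Ct" transform (assumed)
order = leaves_list(linkage(expr.values, method="ward"))        # cluster cells hierarchically

plt.imshow(expr.values[order], aspect="auto", cmap="viridis")
plt.xticks(range(len(genes)), genes, rotation=45)
plt.ylabel("cells (clustered)")
plt.colorbar(label="expression (35 - Ct)")
plt.tight_layout()
plt.savefig("single_cell_heatmap.png", dpi=150)
```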
Earth and Space Science Informatics: Raising Awareness of the Scientists and the Public
NASA Astrophysics Data System (ADS)
Messerotti, M.; Cobabe-Ammann, E.
2009-04-01
The recent developments in Earth and Space Science Informatics have led to the availability of advanced tools for data search, visualization and analysis through, e.g., the Virtual Observatories or distributed data handling infrastructures. Such facilities are accessible via web interfaces and allow refined data handling to be carried out. Nevertheless, to date they are not fully exploited by the scientific community, for a variety of reasons that we analyze in this work, together with viable strategies to overcome the issue. Similarly, such facilities are powerful tools for teaching and for popularization, provided that e-learning programs involving the teachers and, respectively, the communicators are made available. In this context we will consider the present activities and projects, stressing the role and the legacy of the Electronic Geophysical Year.
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.
2011-12-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.
Flood Risk Management in Iowa through an Integrated Flood Information System
NASA Astrophysics Data System (ADS)
Demir, Ibrahim; Krajewski, Witold
2013-04-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 1100 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.
Filming the invisible - time-resolved visualization of compressible flows
NASA Astrophysics Data System (ADS)
Kleine, H.
2010-04-01
Essentially all processes in gasdynamics are invisible to the naked eye as they occur in a transparent medium. The task to observe them is further complicated by the fact that most of these processes are also transient, often with characteristic times that are considerably below the threshold of human perception. Both difficulties can be overcome by combining visualization methods that reveal changes in the transparent medium, and high-speed photography techniques that “stop” the motion of the flow. The traditional approach is to reconstruct a transient process from a series of single images, each taken in a different experiment at a different instant. This approach, which is still widely used today, can only be expected to give reliable results when the process is reproducible. Truly time-resolved visualization, which yields a sequence of flow images in a single experiment, has been attempted for more than a century, but many of the developed camera systems were characterized by a high level of complexity and limited quality of the results. Recent advances in digital high-speed photography have changed this situation and have provided the tools to investigate, with relative ease and in sufficient detail, the true development of a transient flow with characteristic time scales down to one microsecond. This paper discusses the potential and the limitations one encounters when using density-sensitive visualization techniques in time-resolved mode. Several examples illustrate how this approach can reveal and explain a number of previously undetected phenomena in a variety of highly transient compressible flows. It is demonstrated that time-resolved visualization offers numerous advantages which normally outweigh its shortcomings, mainly the often-encountered loss in resolution. Apart from the capability to track the location and/or shape of flow features in space and time, adequate time-resolved visualization allows one to observe the development of deliberately introduced near-isentropic perturbation wavelets. This new diagnostic tool can be used to qualitatively and quantitatively determine otherwise inaccessible thermodynamic properties of a compressible flow.
Advanced pigment dispersion glaucoma secondary to phakic intraocular collamer lens implant.
Ye, Clara; Patel, Cajal K; Momont, Anna C; Liu, Yao
2018-06-01
We report a case of pigment dispersion glaucoma secondary to uncomplicated phakic intraocular collamer lens (ICL) (Visian ICL™, Staar Inc., Monrovia, CA) implant that resulted in advanced visual field loss. A 50-year-old man presented for routine follow-up status post bilateral phakic intraocular collamer lens (ICL) placement 8 years earlier. He was incidentally found to have a decline in visual acuity from an anterior subcapsular cataract and elevated intraocular pressure (IOP) in the left eye. There were signs of pigment dispersion and no evidence of angle closure. Diffuse optic nerve thinning was consistent with advanced glaucomatous visual field defects. Pigment dispersion was also present in the patient's right eye, but without elevated IOP or visual field defects. The patient was treated with topical glaucoma medications and the phakic ICL in the left eye was removed concurrently with cataract surgery to prevent further visual field loss. Pigment dispersion glaucoma is a serious adverse outcome after phakic ICL implantation and regular post-operative monitoring may prevent advanced visual field loss.
NASA Astrophysics Data System (ADS)
Stone, S.; Parker, M. S.; Howe, B.; Lazowska, E.
2015-12-01
Rapid advances in technology are transforming nearly every field from "data-poor" to "data-rich." The ability to extract knowledge from this abundance of data is the cornerstone of 21st century discovery. At the University of Washington eScience Institute, our mission is to engage researchers across disciplines in developing and applying advanced computational methods and tools to real world problems in data-intensive discovery. Our research team consists of individuals with diverse backgrounds in domain sciences such as astronomy, oceanography and geology, with complementary expertise in advanced statistical and computational techniques such as data management, visualization, and machine learning. Two key elements are necessary to foster careers in data science: individuals with cross-disciplinary training in both method and domain sciences, and career paths emphasizing alternative metrics for advancement. We see persistent and deep-rooted challenges for the career paths of people whose skills, activities and work patterns don't fit neatly into the traditional roles and success metrics of academia. To address these challenges the eScience Institute has developed training programs and established new career opportunities for data-intensive research in academia. Our graduate students and post-docs have mentors in both a methodology and an application field. They also participate in coursework and tutorials to advance technical skill and foster community. Professional Data Scientist positions were created to support research independence while encouraging the development and adoption of domain-specific tools and techniques. The eScience Institute also supports the appointment of faculty who are innovators in developing and applying data science methodologies to advance their field of discovery. Our ultimate goal is to create a supportive environment for data science in academia and to establish global recognition for data-intensive discovery across all fields.
Model-Based Reasoning: Using Visual Tools to Reveal Student Learning
ERIC Educational Resources Information Center
Luckie, Douglas; Harrison, Scott H.; Ebert-May, Diane
2011-01-01
Using visual models is common in science and should become more common in classrooms. Our research group has developed and completed studies on the use of a visual modeling tool, the Concept Connector. This modeling tool consists of an online concept mapping Java applet that has automatic scoring functions we refer to as Robograder. The Concept…
A Visual Training Tool for Teaching Kanji to Children with Developmental Dyslexia
ERIC Educational Resources Information Center
Ikeshita-Yamazoe, Hanae; Miyao, Masutomo
2016-01-01
We developed a visual training tool to assist children with developmental dyslexia in learning to recognize and understand Chinese characters (kanji). The visual training tool presents the strokes of a kanji character as separate shapes and requires students to use these fragments to construct the character. Two types of experiments were conducted…
An Exploratory Study of Interactivity in Visualization Tools: "Flow" of Interaction
ERIC Educational Resources Information Center
Liang, Hai-Ning; Parsons, Paul C.; Wu, Hsien-Chi; Sedig, Kamran
2010-01-01
This paper deals with the design of interactivity in visualization tools. There are several factors that can be used to guide the analysis and design of the interactivity of these tools. One such factor is flow, which is concerned with the duration of interaction with visual representations of information--interaction being the actions performed…
AR4VI: AR as an Accessibility Tool for People with Visual Impairments
Coughlan, James M.; Miele, Joshua
2017-01-01
Although AR technology has been largely dominated by visual media, a number of AR tools using both visual and auditory feedback have been developed specifically to assist people with low vision or blindness – an application domain that we term Augmented Reality for Visual Impairment (AR4VI). We describe two AR4VI tools developed at Smith-Kettlewell, as well as a number of pre-existing examples. We emphasize that AR4VI is a powerful tool with the potential to remove or significantly reduce a range of accessibility barriers. Rather than being restricted to use by people with visual impairments, AR4VI is a compelling universal design approach offering benefits for mainstream applications as well. PMID:29303163
Applying Strategic Visualization(Registered Trademark) to Lunar and Planetary Mission Design
NASA Technical Reports Server (NTRS)
Frassanito, John R.; Cooke, D. R.
2002-01-01
NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization (trademark) is used extensively both in the mission design process within the technical community and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles, as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.
NASA Technical Reports Server (NTRS)
Saha, Dipanjan; Lewandowski, Edward J.
2013-01-01
The steady-state, nearly sinusoidal behavior of the components in a Free Piston Stirling Engine allows the forces in the system to be visualized using phasor diagrams. Based on Newton's second law, F = ma, a phasor diagram modeling a given component in a system should close if all of the acting forces have been considered. Since the Advanced Stirling Radioisotope Generator (ASRG), currently being developed for future NASA deep space missions, is made up of such nearly sinusoidally oscillating components, its phasor diagrams would also be expected to close. A graphical user interface (GUI) written in MATLAB takes user input data, passes it to Sage, a 1-D thermodynamic modeling program used to model the Stirling convertor, runs Sage, and then automatically plots the phasor diagrams. Using this software tool, the effect of varying different Sage inputs on the phasor diagrams was determined. The parameters varied were piston amplitude, hot end temperature, cold end temperature, operating frequency, and displacer spring constant. These phasor diagrams provide better insight into why the convertor operates the way that it does.
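As an illustrative aside (not part of the ASRG/Sage toolchain described above), the phasor-closure idea can be sketched in a few lines of Python: for a sinusoidally driven spring-mass-damper, the spring, damper, and inertia force phasors, together with the negative of the drive force, must sum to zero. All parameter values below are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical single-degree-of-freedom parameters (illustrative only)
m, c, k = 0.5, 12.0, 3.0e4        # mass [kg], damping [N*s/m], stiffness [N/m]
f = 80.0                          # operating frequency [Hz]
w = 2.0 * np.pi * f
X = 0.005 + 0j                    # piston amplitude phasor [m]

# For x(t) = Re{X e^{jwt}} and m x'' + c x' + k x = F(t),
# the drive force phasor is F_drive = (-m w^2 + j c w + k) X.
F_inertia = -m * w**2 * X
F_damper = 1j * c * w * X
F_spring = k * X
F_drive = F_inertia + F_damper + F_spring

# Plot spring, damper and inertia phasors head-to-tail, then -F_drive:
# if every force is accounted for, the polygon closes back at the origin.
tail = 0 + 0j
for F, label in zip([F_spring, F_damper, F_inertia, -F_drive],
                    ["spring", "damper", "inertia", "-drive"]):
    plt.arrow(tail.real, tail.imag, F.real, F.imag,
              head_width=15, length_includes_head=True)
    plt.text(tail.real + F.real / 2, tail.imag + F.imag / 2, label)
    tail += F
plt.axis("equal")
plt.title("Force phasor polygon closes when all forces are included")
plt.show()
```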
Functional Architecture of the Retina: Development and Disease
Hoon, Mrinalini; Okawa, Haruhisa; Santina, Luca Della; Wong, Rachel O.L.
2014-01-01
Structure and function are highly correlated in the vertebrate retina, a sensory tissue that is organized into cell layers with microcircuits working in parallel and together to encode visual information. All vertebrate retinas share a fundamental plan, comprising five major neuronal cell classes with cell body distributions and connectivity arranged in stereotypic patterns. Conserved features in retinal design have enabled detailed analysis and comparisons of structure, connectivity and function across species. Each species, however, can adopt structural and/or functional retinal specializations, implementing variations to the basic design in order to satisfy unique requirements in visual function. Recent advances in molecular tools, imaging and electrophysiological approaches have greatly facilitated identification of the cellular and molecular mechanisms that establish the fundamental organization of the retina and the specializations of its microcircuits during development. Here, we review advances in our understanding of how these mechanisms act to shape structure and function at the single cell level, to coordinate the assembly of cell populations, and to define their specific circuitry. We also highlight how structure is rearranged and function is disrupted in disease, and discuss current approaches to re-establish the intricate functional architecture of the retina. PMID:24984227
Functional architecture of the retina: development and disease.
Hoon, Mrinalini; Okawa, Haruhisa; Della Santina, Luca; Wong, Rachel O L
2014-09-01
Structure and function are highly correlated in the vertebrate retina, a sensory tissue that is organized into cell layers with microcircuits working in parallel and together to encode visual information. All vertebrate retinas share a fundamental plan, comprising five major neuronal cell classes with cell body distributions and connectivity arranged in stereotypic patterns. Conserved features in retinal design have enabled detailed analysis and comparisons of structure, connectivity and function across species. Each species, however, can adopt structural and/or functional retinal specializations, implementing variations to the basic design in order to satisfy unique requirements in visual function. Recent advances in molecular tools, imaging and electrophysiological approaches have greatly facilitated identification of the cellular and molecular mechanisms that establish the fundamental organization of the retina and the specializations of its microcircuits during development. Here, we review advances in our understanding of how these mechanisms act to shape structure and function at the single cell level, to coordinate the assembly of cell populations, and to define their specific circuitry. We also highlight how structure is rearranged and function is disrupted in disease, and discuss current approaches to re-establish the intricate functional architecture of the retina. Copyright © 2014 Elsevier Ltd. All rights reserved.
Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F
2016-01-01
Background The complexity of current injury-related health issues demands the use of diverse and massive data sets for comprehensive analyses, and the application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, the availability of new visual analytic methods and tools, and progress in information technology provide an opportunity for shaping the next generation of injury surveillance. Objective To introduce data visualisation conceptual bases, and to propose a visual analytic and visualisation platform for public health surveillance in injury prevention and control. Methods The paper introduces data visualisation conceptual bases, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Results Application of the visual analytic and visualisation platform is presented as a solution to improve access to heterogeneous data sources, enhance data exploration and analysis, communicate data effectively, and support decision-making. Conclusions Applications of data visualisation concepts and a visual analytic platform could play a key role in shaping the next generation of injury surveillance. A visual analytic and visualisation platform could improve data use, analytic capacity, and the ability to effectively communicate findings and key messages. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. PMID:26728006
FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context.
Mader, Malte; Simon, Ronald; Steinbiss, Sascha; Kurtz, Stefan
2011-07-28
The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. We have developed a web-based software FISH Oracle to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology supports highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high quality formats. These features make the software especially suitable for the needs of life scientists. FISH Oracle offers a fast and easy to use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental to identify candidate onco and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle.
FISH Oracle: a web server for flexible visualization of DNA copy number data in a genomic context
2011-01-01
Background The rapidly growing amount of array CGH data requires improved visualization software supporting the process of identifying candidate cancer genes. Optimally, such software should work across multiple microarray platforms, should be able to cope with data from different sources and should be easy to operate. Results We have developed a web-based software FISH Oracle to visualize data from multiple array CGH experiments in a genomic context. Its fast visualization engine and advanced web and database technology supports highly interactive use. FISH Oracle comes with a convenient data import mechanism, powerful search options for genomic elements (e.g. gene names or karyobands), quick navigation and zooming into interesting regions, and mechanisms to export the visualization into different high quality formats. These features make the software especially suitable for the needs of life scientists. Conclusions FISH Oracle offers a fast and easy to use visualization tool for array CGH and SNP array data. It allows for the identification of genomic regions representing minimal common changes based on data from one or more experiments. FISH Oracle will be instrumental to identify candidate onco and tumor suppressor genes based on the frequency and genomic position of DNA copy number changes. The FISH Oracle application and an installed demo web server are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:21884636
VCS: Tool for Visualizing Copy Number Variation and Single Nucleotide Polymorphism.
Kim, HyoYoung; Sung, Samsun; Cho, Seoae; Kim, Tae-Hun; Seo, Kangseok; Kim, Heebal
2014-12-01
Copy number variation (CNV) and single nucleotide polymorphism (SNP) data are useful genetic resources that aid in understanding complex phenotypes or disease susceptibility. Although thousands of CNVs and SNPs are currently available in public databases, they are somewhat difficult to use for analyses without visualization tools. We developed a web-based tool called the VCS (visualization of CNV or SNP) to visualize detected CNVs or SNPs. The VCS tool can assist in easily interpreting the biological meaning of the numerical values of CNVs and SNPs. The VCS provides six visualization tools: i) the enrichment of genome contents in CNV; ii) the physical distribution of CNV or SNP on chromosomes; iii) the distribution of log2 ratios of CNVs meeting criteria of interest; iv) the number of CNV or SNP per binning unit; v) the distribution of homozygosity of SNP genotypes; and vi) a cytomap of genes within CNV or SNP regions.
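A minimal sketch of two of the plot types listed above (the physical distribution of CNVs on chromosomes and the distribution of log2 ratios above a criterion), using synthetic CNV calls rather than the VCS tool itself; the data layout and thresholds are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical CNV calls: (chromosome, start position, log2 ratio); illustrative only
rng = np.random.default_rng(0)
chroms = rng.integers(1, 23, size=500)
pos = rng.uniform(0, 2.5e8, size=500)
log2 = rng.normal(0, 0.4, size=500)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Physical distribution of CNVs along chromosomes (gains red, losses blue)
ax1.scatter(pos / 1e6, chroms, s=8, c=np.where(log2 > 0, "red", "blue"))
ax1.set_xlabel("Position (Mb)")
ax1.set_ylabel("Chromosome")
ax1.set_title("CNV positions")

# Distribution of log2 ratios meeting a chosen criterion
criterion = 0.3
ax2.hist(log2[np.abs(log2) >= criterion], bins=30)
ax2.set_xlabel("log2 ratio")
ax2.set_title(f"CNVs with |log2 ratio| >= {criterion}")

plt.tight_layout()
plt.show()
```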
Experiences in using DISCUS for visualizing human communication
NASA Astrophysics Data System (ADS)
Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta
2000-02-01
In this paper, we present further improvements to the DISCUS software, which can be used to record and analyze the flow and content of discussion in business process simulation sessions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility to use the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
Implementing virtual reality interfaces for the geosciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.; Jacobsen, J.; Austin, A.
1996-06-01
For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, running the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g., attached to a head-mounted display. Rich, extensible and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is the obstacle in that typing is cumbersome, error-prone and typically slow. In the latter, the user can interact with these parameters by means of motor skills which are highly developed. Two specific geoscience application areas will be highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
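For illustration only, a toy content-based retrieval loop in the spirit of the visual-content search described above: images are reduced to grey-level histogram descriptors and ranked by cosine similarity. The descriptor and the synthetic "pollen" images are assumptions, not the paper's actual feature set or indexing structures.

```python
import numpy as np

def grey_histogram(img, bins=32):
    """Simple visual-content descriptor: normalized grey-level histogram."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def retrieve(query, database, k=3):
    """Rank database images by cosine similarity of their descriptors."""
    q = grey_histogram(query)
    feats = np.array([grey_histogram(img) for img in database])
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)
    return order[:k], sims[order[:k]]

# Hypothetical 64x64 grey-scale images with values in [0, 1]
rng = np.random.default_rng(3)
db = [rng.random((64, 64)) for _ in range(20)]
query = np.clip(db[5] + 0.01 * rng.random((64, 64)), 0, 1)
idx, scores = retrieve(query, db)
print("closest images:", idx, "similarities:", np.round(scores, 3))
```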
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
MONA – Interactive manipulation of molecule collections
2013-01-01
Working with small‐molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico‐chemical properties. Pipelining tools make it possible to create and change such workflows without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best-suited workflow is not known in advance, thus making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition‐driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small‐molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be instantly used without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case‐to‐case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni‐hamburg.de/mona. PMID:23985157
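A minimal sketch of the kind of SQL-backed filtering and duplicate checking that such tools automate, assuming a hypothetical compounds table with precomputed properties; this is not MONA's actual schema or API.

```python
import sqlite3

# Hypothetical schema: compounds with precomputed properties and a canonical
# SMILES column (property calculation itself is out of scope here).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE compounds (
                 id INTEGER PRIMARY KEY,
                 canonical_smiles TEXT,
                 mol_weight REAL,
                 logp REAL)""")
con.executemany(
    "INSERT INTO compounds (canonical_smiles, mol_weight, logp) VALUES (?, ?, ?)",
    [("CCO", 46.07, -0.31), ("c1ccccc1", 78.11, 2.13), ("CCO", 46.07, -0.31)])

# Filter by physico-chemical properties (a typical pipeline step)
rows = con.execute(
    "SELECT id, canonical_smiles FROM compounds WHERE mol_weight < 500 AND logp < 5"
).fetchall()
print("Passing filter:", rows)

# Check for duplicates via the canonical SMILES representation
dups = con.execute(
    """SELECT canonical_smiles, COUNT(*) FROM compounds
       GROUP BY canonical_smiles HAVING COUNT(*) > 1"""
).fetchall()
print("Duplicate structures:", dups)
```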
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
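As a hedged illustration of the saliency-model family that the DVS model extends (not the DVS model itself), a crude multi-scale center-surround saliency map on luminance can be computed as follows; the scales and the synthetic test image are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(img_gray):
    """Very rough bottom-up saliency: multi-scale center-surround differences
    on luminance only (an illustrative stand-in, not the DVS model)."""
    img = img_gray.astype(float)
    sal = np.zeros_like(img)
    for center, surround in [(1, 4), (2, 8), (4, 16)]:
        c = gaussian_filter(img, center)
        s = gaussian_filter(img, surround)
        sal += np.abs(c - s)
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)

# Synthetic "visualization" with one high-contrast element
img = np.full((200, 300), 0.5)
img[80:120, 140:180] = 1.0
saliency_map = simple_saliency(img)
print(saliency_map.shape, saliency_map.max())
```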
Meteorological Data Visualization in Multi-User Virtual Reality
NASA Astrophysics Data System (ADS)
Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.
2017-12-01
Due to their complexity and size, visualization of meteorological data is important. It enables precise examination and review of meteorological details and is used as a communication tool for reporting, for education, and to demonstrate the importance of the data to policy makers. Specifically for the UCAR community it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamical data in a more natural way than a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g., of the NCEP North America Model (NAM); see figure). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data include typical meteorological information such as temperature, humidity, and air pressure, as well as data described by the climate forecast (CF) model conventions (http://cfconventions.org). Other data such as lightning-strike data and ultra-high-resolution satellite data are also becoming available. Users can navigate freely around the data, which are presented in a virtual room at a scale of up to 3.5 x 3.5 meters. Multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value, and using a slicing tool to cut off specific sections of the data for a closer look. The slicing can be done in any direction in real time using the concept of a 'virtual knife'. Users can also scoop out parts of the data and walk through successive states of the model. Future plans are, among others, to further improve performance to a higher update rate (to reduce possible motion sickness) and to add more advanced filtering and annotation capabilities. We are looking for cooperation with data owners with use cases such as those mentioned above. This will help us further improve and develop our tool and broaden its application into other domains.
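For readers unfamiliar with NetCDF handling, a small offline sketch of the "slice and filter by value" operations described above, using xarray on a hypothetical NAM-style file; the file name and variable names are assumptions, not the project's actual data layout.

```python
import xarray as xr

# Hypothetical NAM-style NetCDF file and variable name (adjust to the real dataset)
ds = xr.open_dataset("nam_sample.nc")
temp = ds["temperature"]          # assumed dims: (time, level, y, x)

# "Virtual knife": cut a vertical cross-section along one horizontal index
slice_yz = temp.isel(time=0, x=120)

# Filter by value: keep only grid cells warmer than 273.15 K
snapshot = temp.isel(time=0)
warm = snapshot.where(snapshot > 273.15)

print(slice_yz.shape, float(warm.max()))
```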
NASA Astrophysics Data System (ADS)
Messerotti, M.
2009-04-01
Earth and Space Science research, as well as many other disciplines, can nowadays benefit from advanced data handling techniques and tools capable of significantly relieving the scientist of the burden of data search, retrieval, visualization and manipulation, and of exploiting the information content of the data. Some typical examples are Virtual Observatories (VO), specific to a variety of sub-disciplines yet interlinked (a feature intrinsic to the VO architecture), Virtual Globes as advanced 3D selection and visualization interfaces to distributed data repositories, and the Global Earth Observation System of Systems. These information systems are also proving effective in education and outreach activities, as they are usable via web interfaces to give access to, display and download nonhomogeneous datasets in order to raise the awareness of students and the public of the relevant disciplines. Despite that, these effective machineries are still poorly used, both by the scientific community and by the community active in education and outreach. All such infrastructures are designed and developed according to state-of-the-art information and computer engineering techniques and are provided with top features such as ontology- and semantics-based data management and advanced unified web-based interfaces. However, a careful analysis of the issue mentioned above indicates a key aspect that plays a major role, i.e., the inadequate interaction with the users' communities during the design, development, deployment and test phases. Even the best technical tool can appear inadequate to the final user when it does not meet the user's requirements in terms of achievable goals and ease of use. In this work, we consider the user-side features to be taken into account for the optimum exploitation of an information system, in the framework of the interaction between the design engineers and the target communities, towards the setting of good practice for minimizing the developer-user divide.
The 3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
Lotus Base: An integrated information portal for the model legume Lotus japonicus
Mun, Terry; Bachmann, Asger; Gupta, Vikas; Stougaard, Jens; Andersen, Stig U.
2016-01-01
Lotus japonicus is a well-characterized model legume widely used in the study of plant-microbe interactions. However, datasets from various Lotus studies are poorly integrated and lack interoperability. We recognize the need for a comprehensive repository that allows comprehensive and dynamic exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population containing in excess of 120,000 lines, and serves the end-user tightly integrated data from Lotus, such as the reference genome, annotated proteins, and expression profiling data. We report the integration of expression data from the L. japonicus gene expression atlas project, and the development of tools to cluster and export such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk. PMID:28008948
Visualizing driving forces of spatially extended systems using the recurrence plot framework
NASA Astrophysics Data System (ADS)
Riedl, Maik; Marwan, Norbert; Kurths, Jürgen
2017-12-01
The increasing availability of highly resolved spatio-temporal data leads to new opportunities as well as challenges in many scientific disciplines such as climatology, ecology or epidemiology. This allows more detailed insights into the investigated spatially extended systems. However, this development needs advanced techniques of data analysis which go beyond standard linear tools since the more precise consideration often reveals nonlinear phenomena, for example threshold effects. One of these tools is the recurrence plot approach which has been successfully applied to the description of complex systems. Using this technique's power of visualization, we propose the analysis of the local minima of the underlying distance matrix in order to display driving forces of spatially extended systems. The potential of this novel idea is demonstrated by the analysis of the chlorophyll concentration and the sea surface temperature in the Southern California Bight. We are able not only to confirm the influence of El Niño events on the phytoplankton growth in this region but also to confirm two discussed regime shifts in the California current system. This new finding underlines the power of the proposed approach and promises new insights into other complex systems.
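A minimal numpy sketch of the quantities mentioned above, using a synthetic series: time-delay embedding, the pairwise distance matrix, the thresholded recurrence plot, and local minima of the distance matrix. The embedding parameters and threshold are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

# Illustrative scalar series (stand-in for, e.g., chlorophyll or SST anomalies)
rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 600)
x = np.sin(t) + 0.1 * rng.normal(size=t.size)

# Time-delay embedding (dimension 3, delay 10 samples)
dim, tau = 3, 10
emb = np.column_stack(
    [x[i * tau: x.size - (dim - 1 - i) * tau] for i in range(dim)])

# Pairwise distance matrix and thresholded recurrence plot
D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
R = (D < 0.2).astype(int)

# Local minima of the distance matrix along each column, the quantity the
# abstract above proposes for visualizing driving forces
local_min = (D[1:-1, :] < D[:-2, :]) & (D[1:-1, :] < D[2:, :])
print("recurrence rate:", R.mean())
print("mean local minima per column:", local_min.sum(axis=0).mean())
```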
Survey of Network Visualization Tools
2007-12-01
[Fragment of the survey's tool attribute tables; only partially recoverable.] Dimensionality: 2D. Deployment type: components for tool building; standalone tool. OS: Windows. Extensibility: ActiveX; Visual Basic. Interoperability: Daisy is fully compliant with Microsoft's ActiveX; therefore, other Windows-based programs can ... other functions that improve analytic decision making. Available in ActiveX, C++, Java, and .NET editions. Tom Sawyer Visualization: enables you to ...
Goyal, Anupama A; Tur, Komalpreet; Mann, Jason; Townsend, Whitney; Flanders, Scott A; Chopra, Vineet
2017-11-01
Although common, the impact of low-cost bedside visual tools, such as whiteboards, on patient care is unclear. To systematically review the literature and assess the influence of bedside visual tools on patient satisfaction. Medline, Embase, SCOPUS, Web of Science, CINAHL, and CENTRAL. Studies of adult or pediatric hospitalized patients reporting physician identification, understanding of provider roles, patient-provider communication, and satisfaction with care from the use of visual tools were included. Outcomes were categorized as positive, negative, or neutral based on survey responses for identification, communication, and satisfaction. Two reviewers screened studies, extracted data, and assessed the risk of study bias. Sixteen studies met the inclusion criteria. Visual tools included whiteboards (n = 4), physician pictures (n = 7), whiteboard and picture (n = 1), electronic medical record-based patient portals (n = 3), and formatted notepads (n = 1). Tools improved patients' identification of providers (13/13 studies). The impact on understanding the providers' roles was largely positive (8/10 studies). Visual tools improved patient-provider communication (4/5 studies) and satisfaction (6/8 studies). In adults, satisfaction varied between positive with the use of whiteboards (2/5 studies) and neutral with pictures (1/5 studies). Satisfaction related to pictures in pediatric patients was either positive (1/3 studies) or neutral (1/3 studies). Differences in tool format (individual pictures vs handouts with pictures of all providers) and study design (randomized vs cohort) may explain variable outcomes. The use of bedside visual tools appears to improve patient recognition of providers and patient-provider communication. Future studies that include better design and outcome assessment are necessary before widespread use can be recommended. © 2017 Society of Hospital Medicine
Visual Impairment Screening Assessment (VISA) tool: pilot validation.
Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire
2018-03-06
To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant; k=0.736. Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
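The reported screening metrics follow directly from a 2x2 table; the sketch below shows the arithmetic with hypothetical counts (not the VISA study data), including Cohen's kappa for overall agreement.

```python
# Screening-test metrics from a 2x2 table; counts below are hypothetical
tp, fp, fn, tn = 37, 5, 4, 29

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)    # positive predictive value
npv = tn / (tn + fn)    # negative predictive value

# Cohen's kappa: observed agreement corrected for chance agreement
n = tp + fp + fn + tn
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)

print(f"sens={sensitivity:.2%} spec={specificity:.2%} "
      f"PPV={ppv:.2%} NPV={npv:.2%} kappa={kappa:.3f}")
```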
Audio-video decision support for patients: the documentary genré as a basis for decision aids.
Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn
2013-09-01
Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.
Audio‐video decision support for patients: the documentary genré as a basis for decision aids
Volandes, Angelo E.; Barry, Michael J.; Wood, Fiona; Elwyn, Glyn
2011-01-01
Abstract Objective Decision support tools are increasingly using audio‐visual materials. However, disagreement exists about the use of audio‐visual materials as they may be subjective and biased. Methods This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. Results The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio‐visual materials. Three concerns arising from documentary film studies as they apply to the use of audio‐visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio‐visual materials (selection bias) and how to ensure objectivity (editorial bias). Discussion Decision science needs to start a debate about how audio‐visual materials are to be used in decision support tools. Simply because audio‐visual materials may be subjective and open to bias does not mean that we should not use them. Conclusion Methods need to be found to ensure consensus around balance and editorial control, such that audio‐visual materials can be used. PMID:22032516
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.
Visual disturbances in advanced cancer patients: clinical observations.
Saita, L; Polastri, D; De Conno, F
1999-03-01
Visual disturbances in advanced cancer patients are very rarely signaled, evaluated, or adequately treated. The main causes of sight disturbances are primary eye tumors, ocular metastases, and some paraneoplastic syndromes. Sight alteration can also be associated with asthenia, fatigue, anemia, and hypovitaminosis. These symptoms can be monocular or binocular, and their gravity and evolution can vary. Based on a survey of 156 patients, we estimate the prevalence of visual disturbances to be 12% in advanced cancer patients.
Northwestern University Schizophrenia Data and Software Tool (NUSDAST)
Wang, Lei; Kogan, Alex; Cobia, Derin; Alpert, Kathryn; Kolasny, Anthony; Miller, Michael I.; Marcus, Daniel
2013-01-01
The schizophrenia research community has invested substantial resources in collecting, managing and sharing large neuroimaging datasets. As part of this effort, our group has collected high resolution magnetic resonance (MR) datasets from individuals with schizophrenia, their non-psychotic siblings, healthy controls and their siblings. This effort has resulted in a growing resource, the Northwestern University Schizophrenia Data and Software Tool (NUSDAST), an NIH-funded data sharing project to stimulate new research. This resource resides on XNAT Central, and it contains neuroimaging (MR scans, landmarks and surface maps for deep subcortical structures, and FreeSurfer cortical parcellation and measurement data), cognitive (cognitive domain scores for crystallized intelligence, working memory, episodic memory, and executive function), clinical (demographic, sibling relationship, SAPS and SANS psychopathology), and genetic (20 polymorphisms) data, collected from more than 450 subjects, most with 2-year longitudinal follow-up. A neuroimaging mapping, analysis and visualization software tool, CAWorks, is also part of this resource. Moreover, in making our existing neuroimaging data along with the associated meta-data and computational tools publicly accessible, we have established a web-based information retrieval portal that allows the user to efficiently search the collection. This research-ready dataset meaningfully combines neuroimaging data with other relevant information, and it can be used to help facilitate advancing neuroimaging research. It is our hope that this effort will help to overcome some of the commonly recognized technical barriers in advancing neuroimaging research such as lack of local organization and standard descriptions. PMID:24223551
Northwestern University Schizophrenia Data and Software Tool (NUSDAST).
Wang, Lei; Kogan, Alex; Cobia, Derin; Alpert, Kathryn; Kolasny, Anthony; Miller, Michael I; Marcus, Daniel
2013-01-01
The schizophrenia research community has invested substantial resources in collecting, managing and sharing large neuroimaging datasets. As part of this effort, our group has collected high resolution magnetic resonance (MR) datasets from individuals with schizophrenia, their non-psychotic siblings, healthy controls and their siblings. This effort has resulted in a growing resource, the Northwestern University Schizophrenia Data and Software Tool (NUSDAST), an NIH-funded data sharing project to stimulate new research. This resource resides on XNAT Central, and it contains neuroimaging (MR scans, landmarks and surface maps for deep subcortical structures, and FreeSurfer cortical parcellation and measurement data), cognitive (cognitive domain scores for crystallized intelligence, working memory, episodic memory, and executive function), clinical (demographic, sibling relationship, SAPS and SANS psychopathology), and genetic (20 polymorphisms) data, collected from more than 450 subjects, most with 2-year longitudinal follow-up. A neuroimaging mapping, analysis and visualization software tool, CAWorks, is also part of this resource. Moreover, in making our existing neuroimaging data along with the associated meta-data and computational tools publicly accessible, we have established a web-based information retrieval portal that allows the user to efficiently search the collection. This research-ready dataset meaningfully combines neuroimaging data with other relevant information, and it can be used to help facilitate advancing neuroimaging research. It is our hope that this effort will help to overcome some of the commonly recognized technical barriers in advancing neuroimaging research such as lack of local organization and standard descriptions.
Experience Report: Visual Programming in the Real World
NASA Technical Reports Server (NTRS)
Baroth, E.; Hartsough, C.
1994-01-01
This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.
Ghandikota, Sudhir; Hershey, Gurjit K Khurana; Mersha, Tesfaye B
2018-03-24
Advances in high-throughput sequencing technologies have made it possible to generate multiple omics data at an unprecedented rate and scale. The accumulation of these omics data far outpaces the rate at which biologists can mine them and generate new hypotheses to test experimentally. There is an urgent need for powerful tools to efficiently and effectively search and filter these resources to address specific post-GWAS functional genomics questions. However, to date, these resources are scattered across several databases and often lack a unified portal for data annotation and analytics. In addition, existing tools to analyze and visualize these databases are highly fragmented, forcing researchers to access multiple applications and perform manual interventions for each gene or variant in an ad hoc fashion until all their questions are answered. In this study, we present GENEASE, a web-based one-stop bioinformatics tool designed not only to query and explore multi-omics and phenotype databases (e.g., GTEx, ClinVar, dbGaP, GWAS Catalog, ENCODE, Roadmap Epigenomics, KEGG, Reactome, Gene and Phenotype Ontology) in a single web interface but also to perform seamless post-genome-wide association downstream functional and overlap analysis for non-coding regulatory variants. GENEASE accesses over 50 different databases in the public domain, including model organism-specific databases, to facilitate gene/variant and disease exploration, enrichment and overlap analysis in real time. It is a user-friendly tool with a point-and-click interface containing links to support information, including a user manual and examples. GENEASE can be accessed freely at http://research.cchmc.org/mershalab/genease_new/login.html. Tesfaye.Mersha@cchmc.org, Sudhir.Ghandikota@cchmc.org. Supplementary data are available at Bioinformatics online.
Low-Cost Air Quality Monitoring Tools: From Research to Practice (A Workshop Summary)
Griswold, William G.; RS, Abhijit; Johnston, Jill E.; Herting, Megan M.; Thorson, Jacob; Collier-Oxandale, Ashley; Hannigan, Michael
2017-01-01
In May 2017, a two-day workshop was held in Los Angeles (California, U.S.A.) to gather practitioners who work with low-cost sensors used to make air quality measurements. The community of practice included individuals from academia, industry, non-profit groups, community-based organizations, and regulatory agencies. The group gathered to share knowledge developed from a variety of pilot projects in hopes of advancing the collective knowledge about how best to use low-cost air quality sensors. Panel discussion topics included: (1) best practices for deployment and calibration of low-cost sensor systems, (2) data standardization efforts and database design, (3) advances in sensor calibration, data management, and data analysis and visualization, and (4) lessons learned from research/community partnerships to encourage purposeful use of sensors and create change/action. Panel discussions summarized knowledge advances and project successes while also highlighting the questions, unresolved issues, and technological limitations that still remain within the low-cost air quality sensor arena. PMID:29143775
ERIC Educational Resources Information Center
Liang, Hai-Ning; Sedig, Kamran
2010-01-01
Many students find it difficult to engage with mathematical concepts. As a relatively new class of learning tools, visualization tools may be able to promote higher levels of engagement with mathematical concepts. Often, development of new tools may outpace empirical evaluations of the effectiveness of these tools, especially in educational…
An optimized web-based approach for collaborative stereoscopic medical visualization
Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C
2013-01-01
Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization. PMID:23048008
The VAST Challenge: History, Scope, and Outcomes: An introduction to the Special Issue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Grinstein, Georges; Whiting, Mark A.
2014-10-01
Visual analytics aims to facilitate human insight from complex data via a combination of visual representations, interaction techniques, and supporting algorithms. To create new tools and techniques that achieve this goal requires that researchers have an understanding of analytical questions to be addressed, data that illustrates the complexities and ambiguities found in realistic analytic settings, and methods for evaluating whether plausible insights are gained through use of the new methods. However, researchers do not, generally speaking, have access to analysts who can articulate their problems or operational data that is used for analysis. To fill this gap, the Visual Analytics Science and Technology (VAST) Challenge has been held annually since 2006. The VAST Challenge provides an opportunity for researchers to experiment with realistic but not real problems, using realistic synthetic data with known events embedded. Since its inception, the VAST Challenge has evolved along with the visual analytics research community to pose more complex challenges, ranging from text analysis to video analysis to large scale network log analysis. The seven years of the VAST Challenge have seen advancements in research and development, education, evaluation, and in the challenge process itself. This special issue of Information Visualization highlights some of the noteworthy advancements in each of these areas. Some of these papers focus on important research questions related to the challenge itself, and other papers focus on innovative research that has been shaped by participation in the challenge. This paper describes the VAST Challenge process and benefits in detail. It also provides an introduction to and context for the remaining papers in the issue.
The visualization and availability of experimental research data at Elsevier
NASA Astrophysics Data System (ADS)
Keall, Bethan
2014-05-01
In the digital age, the visualization and availability of experimental research data is an increasingly prominent aspect of the research process and of the scientific output that researchers generate. We expect that the importance of data will continue to grow, driven by technological advancements, requirements from funding bodies to make research data available, and a developing research data infrastructure that is supported by data repositories, science publishers, and other stakeholders. Elsevier is actively contributing to these efforts, for example by setting up bidirectional links between online articles on ScienceDirect and relevant data sets on trusted data repositories. A key aspect of Elsevier's "Article of the Future" program, these links enrich the online article and make it easier for researchers to find relevant data and articles and help place data in the right context for re-use. Recently, we have set up such links with some of the leading data repositories in Earth Sciences, including the British Geological Survey, Integrated Earth Data Applications, the UK Natural Environment Research Council, and the Oak Ridge National Laboratory DAAC. Building on these links, Elsevier has also developed a number of data integration and visualization tools, such as an interactive map viewer that displays the locations of relevant data from PANGAEA next to articles on ScienceDirect. In this presentation we will give an overview of these and other capabilities of the Article of the Future, focusing on how they help advance communication of research in the digital age.
Fujioka, Toru; Takiguchi, Shinichiro; Yatsuga, Chiho; Hiratani, Michio; Hong, Kang-E M; Shin, Min-Sup; Cho, Sungzoon; Kosaka, Hirotaka; Tomoda, Akemi
2016-01-01
Objective: This study was conducted to validate the visual attention version of the Advanced Test of Attention (ATA) in Japanese children with attention deficit/hyperactivity disorder (ADHD) and to evaluate the efficacy of methylphenidate (OROS-MPH) and atomoxetine medication. Methods: To assess pharmacotherapy efficacy, the visual version of the ATA was administered to 42 children with ADHD. Results were assessed using discriminant analysis, ANOVA on ATA indices before and after medication treatment, and correlation analysis between improvement in ATA indices and clinical symptoms during medication treatment. Results: Discriminant analysis showed that 69.0% of children with ADHD were classified correctly. The T score of commission errors increased as the trial progressed in the medication-off condition. T scores of commission errors and of the standard deviation of response times in the medication-on condition were low compared with the medication-off condition. A few significant correlations were found between improvements in ATA indices and the ADHD Rating Scale (ADHD-RS) during treatment. Conclusion: Performance on the visual version of the ATA in the medication-off condition reflected the features of ADHD, and the effects of medication treatment were confirmed. In addition, the results suggest that ATA indices reflect aspects of ADHD symptoms that are difficult to capture with the ADHD-RS. For assessing symptoms and the effects of medical treatment in children with ADHD, the ATA may be a useful assessment tool. PMID:26792044
Got Graphs? An Assessment of Data Visualization Tools
NASA Technical Reports Server (NTRS)
Schaefer, C. M.; Foy, M.
2015-01-01
Graphs are powerful tools for simplifying complex data. They are useful for quickly assessing patterns and relationships among one or more variables from a dataset. As the amount of data increases, it becomes more difficult to visualize potential associations. Lifetime Surveillance of Astronaut Health (LSAH) was charged with assessing its current visualization tools along with others on the market to determine whether new tools would be useful for supporting NASA's occupational surveillance effort. It was concluded by members of LSAH that the current tools hindered their ability to provide quick results to researchers working with the department. Due to the high volume of data requests and the many iterations of visualizations requested by researchers, software with a better ability to replicate graphs and edit quickly could improve LSAH's efficiency and lead to faster research results.
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
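The abstract above mentions real-time wavelet-based contrast enhancement and denoising. The sketch below is a minimal Python illustration of wavelet denoising using PyWavelets, not the paper's ActiveX/OLE implementation; the wavelet choice, decomposition level, and universal-threshold rule are assumptions made for illustration only.

```python
# A minimal sketch (not the tool's actual implementation) of wavelet-based
# denoising as described in the abstract, using PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3, sigma=10.0):
    """Soft-threshold the detail coefficients and reconstruct the image."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # "Universal" threshold; sigma is an assumed noise standard deviation.
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    new_details = [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in d)
        for d in details
    ]
    return pywt.waverec2([approx] + new_details, wavelet=wavelet)

# Example on a synthetic noisy image (a real mammogram would be loaded instead).
noisy = np.random.normal(loc=128.0, scale=10.0, size=(256, 256))
denoised = wavelet_denoise(noisy)
```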
Poppenga, Sandra K.; Evans, Gayla; Gesch, Dean; Stoker, Jason M.; Queija, Vivian R.; Worstell, Bruce; Tyler, Dean J.; Danielson, Jeff; Bliss, Norman; Greenlee, Susan
2010-01-01
The mission of U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center Topographic Science is to establish partnerships and conduct research and applications that facilitate the development and use of integrated national and global topographic datasets. Topographic Science includes a wide range of research and applications that result in improved seamless topographic datasets, advanced elevation technology, data integration and terrain visualization, new and improved elevation derivatives, and development of Web-based tools. In cooperation with our partners, Topographic Science is developing integrated-science applications for mapping, national natural resource initiatives, hazards, and global change science. http://topotools.cr.usgs.gov/.
Synchronized Phasor Data for Analyzing Wind Power Plant Dynamic Behavior and Model Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Y. H.
2013-01-01
The U.S. power industry is undertaking several initiatives that will improve the operations of the power grid. One of those is the implementation of 'wide area measurements' using phasor measurement units (PMUs) to dynamically monitor the operations and the status of the network and provide advanced situational awareness and stability assessment. This project seeks to obtain PMU data from wind power plants and grid reference points and develop software tools to analyze and visualize synchrophasor data for the purpose of better understanding wind power plant dynamic behaviors under normal and contingency conditions.
Telemedicine with mobile devices and augmented reality for early postoperative care.
Ponce, Brent A; Brabston, Eugene W; Shin Zu; Watson, Shawna L; Baker, Dustin; Winn, Dennis; Guthrie, Barton L; Shenai, Mahesh B
2016-08-01
Advanced features are being added to telemedicine paradigms to enhance usability and usefulness. Virtual Interactive Presence (VIP) is a technology that allows a surgeon and patient to interact in a "merged reality" space, facilitating verbal, visual, and manual interaction. In this clinical study, a mobile VIP iOS application was introduced into routine post-operative orthopedic and neurosurgical care. Survey responses endorse the usefulness of this tool: the virtual interaction provides needed follow-up in instances where in-person follow-up may be limited, and enhances the subjective patient experience.
The 2002 NASA Faculty Fellowship Program Research Reports
NASA Technical Reports Server (NTRS)
Bland, J. (Compiler)
2003-01-01
Contents include the following: System Identification of X-33 Neural Network. Advanced Ceramic Technology for Space Applications at NASA MSFC. Developing a MATLAB-Based Tool for Visualization and Transformation. Subsurface Stress Fields in Single Crystal (Anisotropic) Contacts. Our Space Future: A Challenge to the Conceptual Artist - Concept Art for Presentation and Education. Identification and Characterization of Extremophile Microorganisms Significant to Astrobiology. Mathematical Investigation of Gamma Ray and Neutron Absorption Grid Patterns for Homeland Defense-Related Fourier Imaging Systems. The Potential of Microwave Radiation for Processing Martian Soil. Fuzzy Logic Trajectory Design and Guidance for Terminal Area.
Tools for visually exploring biological networks.
Suderman, Matthew; Hallett, Michael
2007-10-15
Many tools exist for visually exploring biological networks including well-known examples such as Cytoscape, VisANT, Pathway Studio and Patika. These systems play a key role in the development of integrative biology, systems biology and integrative bioinformatics. The trend in the development of these tools is to go beyond 'static' representations of cellular state, towards a more dynamic model of cellular processes through the incorporation of gene expression data, subcellular localization information and time-dependent behavior. We provide a comprehensive review of the relative advantages and disadvantages of existing systems with two goals in mind: to aid researchers in efficiently identifying the appropriate existing tools for data visualization; to describe the necessary and realistic goals for the next generation of visualization tools. In view of the first goal, we provide in the Supplementary Material a systematic comparison of more than 35 existing tools in terms of over 25 different features. Supplementary data are available at Bioinformatics online.
Visual quality assessment of alternative silvicultural practices in upland hardwood management
Tim McDonald; Bryce Stokes
1997-01-01
Visual impacts of forest operations are of increasing concern to forest managers. Tools are available for evaluating, and potentially avoiding, problems in visual quality resulting from poorly designed harvest unit boundaries. One of these visualization tools is applied in comparing various harvest unit shape alternatives in an upland hardwood stand on steeply sloping...
Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools
ERIC Educational Resources Information Center
Liang, Hai-Ning; Sedig, Kamran
2009-01-01
Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…
Examining Chemistry Students Visual-Perceptual Skills Using the VSCS Tool and Interview Data
ERIC Educational Resources Information Center
Christian, Caroline
2010-01-01
The Visual-Spatial Chemistry Specific (VSCS) assessment tool was developed to test students' visual-perceptual skills, which are required to form a mental image of an object. The VSCS was designed around the theoretical framework of Rochford and Archer that provides eight distinct and well-defined visual-perceptual skills with identified problems…
Texture and art with deep neural networks.
Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias
2017-10-01
Although the study of biological vision and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.
Hardisty, Frank; Robinson, Anthony C.
2010-01-01
In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods. PMID:21731423
Hyperspectral imaging for non-contact analysis of forensic traces.
Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G
2012-11-30
Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy, to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges are addressed, such as the analysis of traces on backgrounds encountered in casework, concluded by a summary of possible future applications. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The pH ruler: a Java applet for developing interactive exercises on acids and bases.
Barrette-Ng, Isabelle H
2011-07-01
In introductory biochemistry courses, it is often a struggle to teach the basic concepts of acid-base chemistry in a manner that is relevant to biological systems. To help students gain a more intuitive and visual understanding of abstract acid-base concepts, a simple graphical construct called the pH ruler Java applet was developed. The applet allows students to visualize the abundance of different protonation states of diprotic and triprotic amino acids at different pH values. Using the applet, the student can drag a widget on a slider bar to change the pH and observe in real time changes in the abundance of different ionization states of this amino acid. This tool provides a means for developing more complex inquiry-based, active-learning exercises to teach more advanced topics of biochemistry, such as protein purification, protein structure and enzyme mechanism.
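As a rough illustration of what the applet visualizes, the sketch below computes the fractional abundance of each protonation state of a triprotic amino acid from its pKa values. This is not the Java applet itself, and the pKa values (approximately those of histidine) are illustrative assumptions.

```python
# A minimal sketch of the underlying acid-base arithmetic: fractional
# abundance of each protonation state of a polyprotic acid at a given pH.
def protonation_fractions(pH, pKas):
    """Return fractions of the species H_nA, H_(n-1)A, ..., A at a given pH."""
    h = 10.0 ** (-pH)
    kas = [10.0 ** (-pk) for pk in pKas]
    n = len(kas)
    terms = []
    for i in range(n + 1):          # species that has lost i protons
        weight = h ** (n - i)
        for ka in kas[:i]:
            weight *= ka
        terms.append(weight)
    total = sum(terms)
    return [t / total for t in terms]

# Example: approximate histidine pKa values, three pH points.
for pH in (2.0, 6.0, 9.5):
    fracs = protonation_fractions(pH, pKas=[1.8, 6.0, 9.2])
    print(pH, [round(f, 3) for f in fracs])
```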
Boscardin, Christy; Fergus, Kirkpatrick B; Hellevig, Bonnie; Hauer, Karen E
2017-11-09
Easily accessible and interpretable performance data constitute critical feedback for learners that facilitate informed self-assessment and learning planning. To provide this feedback, there has been a proliferation of educational dashboards in recent years. An educational (learner) dashboard systematically delivers timely and continuous feedback on performance and can provide easily visualized and interpreted performance data. In this paper, we provide practical tips for developing a functional, user-friendly individual learner performance dashboard and literature review of dashboard development, assessment theory, and users' perspectives. Considering key design principles and maximizing current technological advances in data visualization techniques can increase dashboard utility and enhance the user experience. By bridging current technology with assessment strategies that support learning, educators can continue to improve the field of learning analytics and design of information management tools such as dashboards in support of improved learning outcomes.
Božičević, Alen; Dobrzyński, Maciej; De Bie, Hans; Gafner, Frank; Garo, Eliane; Hamburger, Matthias
2017-12-05
The technological development of LC-MS instrumentation has led to significant improvements in performance and sensitivity, enabling high-throughput analysis of complex samples, such as plant extracts. Most software suites allow preprocessing of LC-MS chromatograms to obtain comprehensive information on single constituents. However, more advanced processing needs, such as the systematic and unbiased comparative metabolite profiling of large numbers of complex LC-MS chromatograms, remain a challenge. Currently, users have to rely on different tools to perform such data analyses. We developed a two-step protocol comprising a comparative metabolite profiling tool integrated in the ACD/MS Workbook Suite, and a web platform developed in the R language designed for clustering and visualization of chromatographic data. Initially, all relevant chromatographic and spectroscopic data (retention time, molecular ions with the respective ion abundance, and sample names) are automatically extracted and assembled in an Excel spreadsheet. The file is then loaded into an online web application that includes various statistical algorithms and provides the user with tools to compare and visualize the results in intuitive 2D heatmaps. We applied this workflow to LC-ESIMS profiles obtained from 69 honey samples. Within a few hours of calculation on a standard PC, the honey samples were preprocessed and organized in clusters based on their metabolite profile similarities, thereby highlighting the common metabolite patterns and distributions among samples. Implementation in the ACD/Laboratories software package enables further integration of other analytical data and of in silico prediction tools for modern drug discovery.
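A hedged sketch of the second step of the workflow described above (clustering samples by metabolite-profile similarity and rendering a 2D heatmap) is shown below, written in Python rather than the authors' R web platform; the synthetic feature table, distance metric, and linkage method are assumptions.

```python
# A hypothetical sketch, not the published tool: hierarchical clustering of
# sample metabolite profiles followed by a clustered heatmap.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
profiles = rng.random((69, 120))     # 69 honey samples x 120 aligned features

# Order samples by average-linkage clustering on cosine distance.
order = leaves_list(linkage(profiles, method="average", metric="cosine"))

plt.imshow(profiles[order], aspect="auto", cmap="viridis")
plt.xlabel("aligned LC-MS feature")
plt.ylabel("sample (clustered order)")
plt.tight_layout()
plt.savefig("honey_profile_heatmap.png")
```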
NASA Astrophysics Data System (ADS)
Basista, A.
2013-12-01
There are many tools for managing spatial data. They are known as Geographic Information Systems (GIS), and, beyond visualizing data in space, they let users carry out various spatial analyses. Thanks to them, it is possible to obtain additional, essential information for real estate market analysis. Much scientific research presents the use of GIS for future mass valuation, because advanced tools are necessary to manage the huge sets of real estate data gathered for mass valuation. In practice, appraisers rarely use these tools for single valuations, because few GIS tools are available to support real estate valuation. The paper presents the functionality of a geoinformatic subsystem that is used to support real estate market analysis and real estate valuation. A detailed description is given of the process of entering attributes into the database and of calculating attribute values based on the proposed definition of attribute scales. This work also presents the algorithm for selecting similar properties that was implemented within the described subsystem. The main stage of this algorithm is the calculation of a price-creative indicator for each real estate, using its attribute values. The set of properties chosen in this way is visualized on the map. The geoinformatic subsystem is used for undeveloped real estate and residential premises. Geographic Information System software was used to develop this project: the basic functionality of gvSIG software (open-source software) was extended, and extra functions were added to support real estate market analysis.
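The following is a hypothetical sketch, not the subsystem's actual code, of the similar-property selection step described above: a price-creative indicator is computed from scaled attribute values, and the properties whose indicators are closest to the subject property are returned. The attribute names and weights are invented for illustration.

```python
# A minimal sketch of similar-property selection via a weighted indicator.
from dataclasses import dataclass

@dataclass
class Property:
    ident: str
    attributes: dict     # attribute name -> scaled value (e.g. 0-5)

# Hypothetical attribute weights; the paper's actual scales differ.
WEIGHTS = {"location": 0.4, "area": 0.3, "utilities": 0.2, "neighbourhood": 0.1}

def price_creative_indicator(prop):
    """Combine scaled attribute values into a single indicator."""
    return sum(WEIGHTS[name] * value for name, value in prop.attributes.items())

def select_similar(subject, market, k=5):
    """Return the k market properties with indicators closest to the subject."""
    target = price_creative_indicator(subject)
    ranked = sorted(market, key=lambda p: abs(price_creative_indicator(p) - target))
    return ranked[:k]
```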
ASCI visualization tool evaluation, Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kegelmeyer, P.
1997-04-01
The charter of the ASCI Visualization Common Tools subgroup was to investigate and evaluate 3D scientific visualization tools. As part of that effort, a Tri-Lab evaluation effort was launched in February of 1996. The first step was to agree on a thoroughly documented list of 32 features against which all tool candidates would be evaluated. These evaluation criteria were both gleaned from a user survey and determined from informed extrapolation into the future, particularly as concerns the 3D nature and extremely large size of ASCI data sets. The second step was to winnow a field of 41 candidate tools down to 11. The selection principle was to be as inclusive as practical, retaining every tool that seemed to hold any promise of fulfilling all of ASCI's visualization needs. These 11 tools were then closely investigated by volunteer evaluators distributed across LANL, LLNL, and SNL. This report contains the results of those evaluations, as well as a discussion of the evaluation philosophy and criteria.
Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying
2014-03-01
The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.
NASA Astrophysics Data System (ADS)
gochis, David; hooper, Rick; parodi, Antonio; Jha, Shantenu; Yu, Wei; Zaslavsky, Ilya; Ganapati, Dinesh
2014-05-01
The community WRF-Hydro system is currently being used in a variety of flood prediction and regional hydroclimate impacts assessment applications around the world. Despite its increasingly wide use certain cyberinfrastructure bottlenecks exist in the setup, execution and post-processing of WRF-Hydro model runs. These bottlenecks result in wasted time, labor, data transfer bandwidth and computational resource use. Appropriate development and use of cyberinfrastructure to setup and manage WRF-Hydro modeling applications will streamline the entire workflow of hydrologic model predictions. This talk will present recent advances in the development and use of new open-source cyberinfrastructure tools for the WRF-Hydro architecture. These tools include new web-accessible pre-processing applications, supercomputer job management applications and automated verification and visualization applications. The tools will be described successively and then demonstrated in a set of flash flood use cases for recent destructive flood events in the U.S. and in Europe. Throughout, an emphasis on the implementation and use of community data standards for data exchange is made.
TSVdb: a web-tool for TCGA splicing variants analysis.
Sun, Wenjie; Duan, Ting; Ye, Panmeng; Chen, Kelie; Zhang, Guanling; Lai, Maode; Zhang, Honghe
2018-05-29
Collaborative projects such as The Cancer Genome Atlas (TCGA) have generated various -omics and clinical data on cancer. Many computational tools have been developed to facilitate the study of the molecular characterization of tumors using data from the TCGA. Alternative splicing of a gene produces splicing variants, and accumulating evidence has revealed its essential role in cancer-related processes, implying the urgent need to discover tumor-specific isoforms and uncover their potential functions in tumorigenesis. We developed TSVdb, a web-based tool, to explore alternative splicing based on TCGA samples with 30 clinical variables from 33 tumors. TSVdb has an integrated and well-proportioned interface for visualization of the clinical data, gene expression, usage of exons/junctions and splicing patterns. Researchers can interpret the isoform expression variations between or across clinical subgroups and estimate the relationships between isoforms and patient prognosis. TSVdb is available at http://www.tsvdb.com , and the source code is available at https://github.com/wenjie1991/TSVdb . TSVdb will inspire oncologists and accelerate isoform-level advances in cancer research.
Analysis of post-mining excavations as places for municipal waste
NASA Astrophysics Data System (ADS)
Górniak-Zimroz, Justyna
2018-01-01
Waste management planning is an interdisciplinary task covering a wide range of issues, including costs, legal requirements, spatial planning, environmental protection, geography, demographics, and the techniques used in collecting, transporting, processing, and disposing of waste. Designing and analyzing this issue is difficult and requires advanced analysis methods and tools available in GIS (geographic information systems), which contain readily available graphical and descriptive databases, data analysis tools that provide expert decision support in selecting the best design alternative, and simulation models that allow the user to simulate many variants of waste management together with graphical visualization of the results of the performed analyses. As part of this research study, work was undertaken on the use of multi-criteria data analysis in waste management in areas located in southwestern Poland. This work proposes including post-mining excavations in waste management as places for the final or temporary collection of waste, assessed for suitability with the tools available in GIS systems.
Water facilities in retrospect and prospect: An illuminating tool for vehicle design
NASA Technical Reports Server (NTRS)
Erickson, G. E.; Peak, D. J.; Delfrate, J.; Skow, A. M.; Malcolm, G. N.
1986-01-01
Water facilities play a fundamental role in the design of air, ground, and marine vehicles by providing a qualitative, and sometimes quantitative, description of complex flow phenomena. Water tunnels, channels, and tow tanks used as flow-diagnostic tools have experienced a renaissance in recent years in response to the increased complexity of designs suitable for advanced technology vehicles. These vehicles are frequently characterized by large regions of steady and unsteady three-dimensional flow separation and ensuing vortical flows. The visualization and interpretation of the complicated fluid motions about isolated vehicle components and complete configurations in a time and cost effective manner in hydrodynamic test facilities is a key element in the development of flow control concepts, and, hence, improved vehicle designs. A historical perspective of the role of water facilities in the vehicle design process is presented. The application of water facilities to specific aerodynamic and hydrodynamic flow problems is discussed, and the strengths and limitations of these important experimental tools are emphasized.
Streamlining Collaborative Planning in Spacecraft Mission Architectures
NASA Technical Reports Server (NTRS)
Misra, Dhariti; Bopf, Michel; Fishman, Mark; Jones, Jeremy; Kerbel, Uri; Pell, Vince
2000-01-01
During the past two decades, the planning and scheduling community has substantially increased the capability and efficiency of individual planning and scheduling systems. Relatively recently, research work to streamline collaboration between planning systems is gaining attention. Spacecraft missions stand to benefit substantially from this work, as they require the coordination of multiple planning organizations and planning systems. Up to the present time this coordination has demanded a great deal of human intervention and/or extensive custom software development efforts. This problem will become acute with increased requirements for cross-mission plan coordination and multi-spacecraft mission planning. The Advanced Architectures and Automation Branch of NASA's Goddard Space Flight Center is taking innovative steps to define collaborative planning architectures and to identify coordinated planning tools for Cross-Mission Campaigns. Prototypes are being developed to validate these architectures and assess the usefulness of the coordination tools by the planning community. This presentation will focus on one such planning coordination tool, named the Visual Observation Layout Tool (VOLT), which is currently being developed to streamline the coordination between astronomical missions.
Coastal On-line Assessment and Synthesis Tool 2.0
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Nguyen, Beth
2011-01-01
COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.
Discovering Tradeoffs, Vulnerabilities, and Dependencies within Water Resources Systems
NASA Astrophysics Data System (ADS)
Reed, P. M.
2015-12-01
There is a growing recognition and interest in using emerging computational tools for discovering the tradeoffs that emerge across complex combinations of infrastructure options, adaptive operations, and sign posts. As a field concerned with "deep uncertainties", it is logically consistent to include a more direct acknowledgement that our choices for dealing with computationally demanding simulations, advanced search algorithms, and sensitivity analysis tools are themselves subject to failures that could adversely bias our understanding of how systems' vulnerabilities change with proposed actions. Balancing simplicity versus complexity in our computational frameworks is nontrivial given that we are often exploring high-impact, irreversible decisions. It is not always clear that accepted models even encompass important failure modes. Moreover, as models become more complex and computationally demanding, the benefits and consequences of simplifications are often untested. This presentation discusses our efforts to address these challenges through our "many-objective robust decision making" (MORDM) framework for the design and management of water resources systems. The MORDM framework has four core components: (1) elicited problem conception and formulation, (2) parallel many-objective search, (3) interactive visual analytics, and (4) negotiated selection of robust alternatives. Problem conception and formulation is the process of abstracting a practical design problem into a mathematical representation. We build on the emerging work in visual analytics to exploit interactive visualization of both the design space and the objective space in multiple heterogeneous linked views that permit exploration and discovery. Many-objective search produces tradeoff solutions from potentially competing problem formulations that can each consider up to ten conflicting objectives based on current computational search capabilities. Negotiated design selection uses interactive visualization, reformulation, and optimization to discover desirable designs for implementation. Multi-city urban water supply portfolio planning will be used to illustrate the MORDM framework.
SNP-VISTA: An interactive SNP visualization tool
Shah, Nameeta; Teplitsky, Michael V; Minovitsky, Simon; Pennacchio, Len A; Hugenholtz, Philip; Hamann, Bernd; Dubchak, Inna L
2005-01-01
Background Recent advances in sequencing technologies promise to provide a better understanding of the genetics of human disease as well as the evolution of microbial populations. Single Nucleotide Polymorphisms (SNPs) are established genetic markers that aid in the identification of loci affecting quantitative traits and/or disease in a wide variety of eukaryotic species. With today's technological capabilities, it has become possible to re-sequence a large set of appropriate candidate genes in individuals with a given disease in an attempt to identify causative mutations. In addition, SNPs have been used extensively in efforts to study the evolution of microbial populations, and the recent application of random shotgun sequencing to environmental samples enables more extensive SNP analysis of co-occurring and co-evolving microbial populations. The program is available at [1]. Results We have developed and present two modifications of an interactive visualization tool, SNP-VISTA, to aid in the analyses of the following types of data: A. Large-scale re-sequence data of disease-related genes for discovery of associated and/or causative alleles (GeneSNP-VISTA). B. Massive amounts of ecogenomics data for studying homologous recombination in microbial populations (EcoSNP-VISTA). The main features and capabilities of SNP-VISTA are: 1) mapping of SNPs to gene structure; 2) classification of SNPs, based on their location in the gene, frequency of occurrence in samples and allele composition; 3) clustering, based on user-defined subsets of SNPs, highlighting haplotypes as well as recombinant sequences; 4) integration of protein evolutionary conservation visualization; and 5) display of automatically calculated recombination points that are user-editable. Conclusion The main strength of SNP-VISTA is its graphical interface and use of visual representations, which support interactive exploration and hence better understanding of large-scale SNP data by the user. PMID:16336665
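As a rough illustration of SNP-VISTA's clustering feature (item 3 above), the sketch below groups re-sequenced samples on a user-defined subset of SNP positions so that shared haplotypes cluster together. The genotype matrix, distance metric, and cut threshold are assumptions for illustration; this is not the tool's own code.

```python
# A hypothetical sketch of haplotype clustering on a user-selected SNP subset.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
genotypes = rng.integers(0, 2, size=(40, 200))   # 40 samples x 200 SNP sites
selected = [5, 17, 42, 88, 130, 199]             # user-defined SNP subset

# Complete-linkage clustering on Hamming distance between genotype vectors.
Z = linkage(genotypes[:, selected], method="complete", metric="hamming")
haplotype_groups = fcluster(Z, t=0.25, criterion="distance")
print(haplotype_groups)                          # cluster label per sample
```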
Molecular brain imaging in the multimodality era
Price, Julie C
2012-01-01
Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068
Designsafe-Ci a Cyberinfrastructure for Natural Hazard Simulation and Data
NASA Astrophysics Data System (ADS)
Dawson, C.; Rathje, E.; Stanzione, D.; Padgett, J.; Pinelli, J. P.
2017-12-01
DesignSafe is the web-based research platform of the Natural Hazards Engineering Research Infrastructure (NHERI) network that provides the computational tools needed to manage and analyze critical data for natural hazards research, with wind and storm surge related hazards being a primary focus. One of the simulation tools under DesignSafe is the Advanced Circulation (ADCIRC) model, a coastal ocean model used in storm surge analysis. ADCIRC is an unstructured, finite element model with high resolution capabilities for studying storm surge impacts, and has long been used in storm surge hind-casting and forecasting. In this talk, we will demonstrate the use of ADCIRC within the DesignSafe platform and its use for forecasting Hurricane Harvey. We will also demonstrate how to analyze, visualize and archive critical storm surge related data within DesignSafe.
Novel optical methodologies in studying mechanical signal transduction in mammalian cells
NASA Technical Reports Server (NTRS)
Stamatas, G. N.; McIntire, L. V.
1999-01-01
For the last three decades, evidence has been accumulating that some types of mammalian cells respond to their mechanically active environment by altering their morphology, growth rate, and metabolism. The study of such responses is very important in understanding physiological and pathological conditions ranging from bone formation to atherosclerosis. Obtaining this knowledge has been the goal of an active research area in bioengineering termed cell mechanotransduction. The advancement of optical methodologies used in cell biology research has provided the tools to elucidate cellular mechanisms that would otherwise be impossible to visualize. Combined with molecular biology techniques, they give engineers invaluable tools for understanding the chemical pathways involved in mechanotransduction. Herein we briefly review the current knowledge on mechanical signal transduction in mammalian cells, focusing on the application of novel optical techniques in the ongoing research.
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. The performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and the minimal training required to perform gesture commands.
NASA Astrophysics Data System (ADS)
Sarni, W.
2017-12-01
Water scarcity and poor water quality impact economic development, business growth, and social well-being. Water has become, in our generation, the foremost critical local, regional, and global issue of our time. Despite these needs, there is no water hub or water technology accelerator solely dedicated to water data and tools. The public and private sectors need vastly improved data management and visualization tools. This is the WetDATA opportunity: to develop a water data tech hub dedicated to water data acquisition, analytics, and visualization tools for informed policy and business decisions. WetDATA's tools will help incubate disruptive water data technologies and accelerate adoption of current water data solutions. WetDATA is a Colorado-based (501c3) global hub for water data analytics and technology innovation. WetDATA's vision is to be a global leader in water information and data technology innovation and to collaborate with other US and global water technology hubs. ROADMAP: * A portal (www.wetdata.org) to provide stakeholders with tools/resources to understand related water risks. * The initial activities will provide education, awareness, and tools to stakeholders to support the implementation of the Colorado State Water Plan. * Leverage the Western States Water Council Water Data Exchange database. * Development of visualization, predictive analytics, and AI tools to engage with stakeholders and provide actionable data and information. TOOLS: Education: provide information on water issues and risks at the local, state, national, and global scale. Visualizations: development of data analytics and visualization tools, based upon the 2030 Water Resources Group methodology, to support the implementation of the Colorado State Water Plan. Predictive Analytics: accessing publicly available water databases and using machine learning to develop water availability forecasting tools and time-lapse images to support city/urban planning.
An evaluation of copy number variation detection tools for cancer using whole exome sequencing data.
Zare, Fatima; Dow, Michelle; Monteleone, Nicholas; Hosny, Abdelrahman; Nabavi, Sheida
2017-05-31
Recently, copy number variation (CNV) has gained considerable interest as a type of genomic/genetic variation that plays an important role in disease susceptibility. Advances in sequencing technology have created an opportunity for detecting CNVs more accurately. Whole exome sequencing (WES) has recently become a primary strategy for sequencing patient samples and studying their genomic aberrations. However, compared to whole genome sequencing, WES introduces more biases and noise that make CNV detection very challenging. Additionally, the complexity of tumors makes the detection of cancer-specific CNVs even more difficult. Although many CNV detection tools have been developed since the introduction of NGS data, there are few tools for somatic CNV detection from WES data in cancer. In this study, we evaluated the performance of the most recent and commonly used CNV detection tools for WES data in cancer to identify their limitations and provide guidelines for developing new ones. We focused on tools that have been designed, or have the ability, to detect cancer somatic aberrations. We compared the performance of the tools in terms of sensitivity and false discovery rate (FDR) using real and simulated data. Comparative analysis of the results showed that there is low consensus among the tools in calling CNVs. Using real data, the tools show moderate sensitivity (~50% - ~80%), fair specificity (~70% - ~94%), and poor FDRs (~27% - ~60%). Using simulated data, we also observed that increasing the coverage beyond 10× in exonic regions does not significantly improve the detection power of the tools. The limited performance of current CNV detection tools for WES data in cancer indicates the need for more efficient and precise CNV detection methods. Due to the complexity of tumors and the high level of noise and bias in WES data, it is necessary to employ advanced, novel segmentation, normalization, and de-noising techniques designed specifically for cancer data. CNV detection development also suffers from the lack of a gold standard for performance evaluation. Finally, developing tools with user-friendly interfaces and visualization features could enhance CNV studies for a broader range of users.
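A hedged sketch of the evaluation metrics used in the study, sensitivity and false discovery rate of CNV calls against a truth set, is given below; the interval representation and the simple overlap rule for counting a true positive are assumptions made for illustration, not the study's exact scoring procedure.

```python
# A minimal sketch: score CNV calls (chrom, start, end) against truth intervals.
def overlaps(a, b):
    """True if two (chrom, start, end) intervals overlap on the same chromosome."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def sensitivity_fdr(calls, truth):
    tp_calls = [c for c in calls if any(overlaps(c, t) for t in truth)]
    detected = [t for t in truth if any(overlaps(c, t) for c in calls)]
    sensitivity = len(detected) / len(truth) if truth else 0.0
    fdr = 1.0 - len(tp_calls) / len(calls) if calls else 0.0
    return sensitivity, fdr

truth = [("chr1", 1000, 5000), ("chr2", 200, 900)]
calls = [("chr1", 1200, 4800), ("chr1", 9000, 9500)]
print(sensitivity_fdr(calls, truth))   # -> (0.5, 0.5)
```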
Cross-Dataset Analysis and Visualization Driven by Expressive Web Services
NASA Astrophysics Data System (ADS)
Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad
2015-04-01
The deluge of data that is hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: data no longer need to be downloaded and stored beforehand; instead, the services interact in real time with GIS applications. Due to the very large amount of data that is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data, and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources and having different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open-source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, a.k.a. the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization. The application offers a set of features to visualize and cross-compare the datasets. Users can select a region of interest in space and time on which an aerosol map layer is plotted. Hovmoeller time-latitude and time-longitude profiles can be displayed by selecting orthogonal cross-sections on the globe. Statistics about the selected dataset are also displayed in different text and plot formats. The datasets can also be cross-compared using either the delta map tool or the merged map tool. For more advanced users, a WCPS query console is also offered, allowing users to process their data with ad-hoc queries and then choose how to display the results. Overall, the user has a rich set of tools that can be used to visualize and cross-compare the aerosol datasets. With our application we have shown how the NASA WorldWind framework can be used to display results processed efficiently - and entirely - on the server side using the expressiveness of the OGC WCPS web service. The application serves not only as a proof of concept of a new paradigm in working with large geospatial data but also as a useful tool for environmental data analysts.
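For readers unfamiliar with WCPS, the sketch below shows the general shape of a server-side query such as the application issues: a scalar condenser (a time-averaged aerosol value) evaluated entirely on the server via the WCS processing extension. The endpoint URL, coverage name, and time-axis label are assumptions, not the project's actual service, and the exact request binding may differ between server implementations.

```python
# A hypothetical sketch of submitting a WCPS query over HTTP.
import requests

ENDPOINT = "https://example.org/rasdaman/ows"      # hypothetical WCS endpoint
COVERAGE = "AerosolOpticalThickness"               # hypothetical coverage name

# Scalar condenser: average AOT over one month, computed server-side.
wcps_query = (
    f'for c in ({COVERAGE}) '
    f'return avg(c[ansi("2008-06-01T00:00:00Z":"2008-06-30T00:00:00Z")])'
)

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",   # WCS Processing extension operation
    "query": wcps_query,
})
print(resp.status_code, resp.text)
```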
Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study
Garneau, Christopher J; Erbacher, Robert F
2016-11-01
US Army Research Laboratory report, approved for public release, covering work performed from January 2013 to September 2015.
Visual impairment and traits of autism in children.
Wrzesińska, Magdalena; Kapias, Joanna; Nowakowska-Domagała, Katarzyna; Kocur, Józef
2017-04-30
Visual impairment present from birth or from early childhood may lead to psychosocial and emotional disorders. Between 11% and 40% of children with visual impairment show traits of autism. The aim of this paper was to present selected examples of how visual impairment in children is related to the occurrence of autism and to describe the available tools for diagnosing autism in children with visual impairment. So far, the relation between visual impairment in children and autism has not been sufficiently confirmed. Psychiatric and psychological diagnosis of children with visual impairment is complicated by the difficulty of differentiating between "blindisms" and traits typical of autism, which results from the lack of standardized diagnostic tools for children with visual impairment. Another difficulty in diagnosing autism in children with visual impairment is the coexistence of other disabilities in most of these children. Additionally, apart from the difficulties in diagnosing autistic disorders in children with eye dysfunctions, there is also the question of which tools should be used in the therapy and rehabilitation of these patients.
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
Interactive Visualization of Dependencies
ERIC Educational Resources Information Center
Moreno, Camilo Arango; Bischof, Walter F.; Hoover, H. James
2012-01-01
We present an interactive tool for browsing course requisites as a case study of dependency visualization. This tool uses multiple interactive visualizations to allow the user to explore the dependencies between courses. A usability study revealed that the proposed browser provides significant advantages over traditional methods, in terms of…
Visualizing Qualitative Information
ERIC Educational Resources Information Center
Slone, Debra J.
2009-01-01
The abundance of qualitative data in today's society and the need to easily scrutinize, digest, and share this information calls for effective visualization and analysis tools. Yet, no existing qualitative tools have the analytic power, visual effectiveness, and universality of familiar quantitative instruments like bar charts, scatter-plots, and…
Unit Planning Grids for Visual Arts--Grade 9-12 Advanced.
ERIC Educational Resources Information Center
Delaware State Dept. of Education, Dover.
This planning grid for teaching visual arts (advanced) in grades 9-12 in Delaware outlines the following six standards for students to complete: (1) students will select and use form, media, techniques, and processes to create works of art and communicate meaning; (2) students will create ways to use visual, spatial, and temporal concepts in…
ERIC Educational Resources Information Center
DePountis, Vicki M.; Pogrund, Rona L.; Griffin-Shirley, Nora; Lan, William Y.
2015-01-01
Introduction: This research examined the perspectives of teachers of students who are visually impaired regarding the use and effectiveness of high-tech assistive technology purported to assist visually impaired students in advanced mathematics. Methods: The data for this study were collected via a mixed-methods online survey distributed through…
Visualization of protein interaction networks: problems and solutions
2013-01-01
Background: Visualization concerns the representation of data visually and is an important task in scientific research. Protein-protein interactions (PPI) are discovered using either wet-lab techniques, such as mass spectrometry, or in silico prediction tools, resulting in large collections of interactions stored in specialized databases. The set of all interactions of an organism forms a protein-protein interaction network (PIN) and is an important tool for studying the behaviour of the cell machinery. Since graphic representation of PINs may highlight important substructures, e.g. protein complexes, visualization is increasingly used to study the underlying graph structure of PINs. Although graphs are well-known data structures, there are several open problems regarding PIN visualization: the high number of nodes and connections, the heterogeneity of nodes (proteins) and edges (interactions), and the possibility to annotate proteins and interactions with biological information extracted from ontologies (e.g. Gene Ontology), which enriches the PINs with semantic information but complicates their visualization. Methods: In recent years many software tools for the visualization of PINs have been developed. Initially intended for visualization only, some of them have subsequently been enriched with new functions for PPI data management and PIN analysis. The paper analyzes the main software tools for PIN visualization considering four main criteria: (i) technology, i.e. availability/license of the software and supported OS (operating system) platforms; (ii) interoperability, i.e. the ability to import/export networks in various formats, the ability to export data in a graphic format, and the extensibility of the system, e.g. through plug-ins; (iii) visualization, i.e. supported layout and rendering algorithms and the availability of parallel implementations; (iv) analysis, i.e. the availability of network analysis functions, such as clustering or mining of the graph, and the possibility to interact with external databases. Results: Currently, many tools are available and it is not easy for users to choose one of them. Some tools offer sophisticated 2D and 3D network visualization and make many layout algorithms available, other tools are more data-oriented and support the integration of interaction data coming from different sources as well as data annotation. Finally, some specialized tools are dedicated to the analysis of pathways and cellular processes and are oriented toward systems biology studies, where the dynamic aspects of the processes being studied are central. Conclusion: A current trend is the deployment of open, extensible visualization tools (e.g. Cytoscape) that may be incrementally enriched by the interactomics community with novel and more powerful functions for PIN analysis through the development of plug-ins. On the other hand, another emerging trend concerns the efficient, parallel implementation of the visualization engine, which may provide high interactivity and near real-time response times, as in NAViGaTOR. From a technological point of view, open-source, free, and extensible tools like Cytoscape guarantee long-term sustainability owing to the size of their developer and user communities, and provide great flexibility since new functions are continuously added by the developer community through new plug-ins; however, the emerging parallel, often closed-source tools like NAViGaTOR can offer near real-time response times even in the analysis of very large PINs. PMID:23368786
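As a small, hypothetical illustration of the kind of rendering these tools provide (criterion iii above), the sketch below lays out a toy protein-protein interaction network with a force-directed algorithm using NetworkX and Matplotlib; the listed interactions are invented for the example, and real PINs would come from the specialized databases discussed above.

```python
# A toy PIN drawn with a force-directed (spring) layout.
import networkx as nx
import matplotlib.pyplot as plt

interactions = [("TP53", "MDM2"), ("TP53", "BRCA1"), ("BRCA1", "RAD51"),
                ("MDM2", "UBE2D1"), ("RAD51", "BRCA2")]
pin = nx.Graph(interactions)

pos = nx.spring_layout(pin, seed=42)          # force-directed 2D layout
nx.draw_networkx(pin, pos, node_color="lightsteelblue", font_size=8)
plt.axis("off")
plt.savefig("toy_pin.png", dpi=150)
```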
Accessing and Visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, Bruce G.; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids. These tools do one or more of the following tasks: visualize local data sets for local users, visualize local data sets for remote users, and access and visualize remote data sets. The tools are used for various types of data, including remotely sensed image data, digital elevation models, astronomical surveys, etc. The paper attempts to pull some common elements out of these tools that may be useful for others who have to work with similarly large data sets.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
The advanced magnetovision system for Smart application
NASA Astrophysics Data System (ADS)
Kaleta, Jerzy; Wiewiórski, Przemyslaw; Lewandowski, Daniel
2010-04-01
An original method, together with measurement devices and a software tool, is proposed for the examination of magneto-mechanical phenomena in a wide range of SMART applications. In many high-end constructions it is necessary to examine mechanical and magnetic properties simultaneously. Technological processes used in the fabrication of modern materials (for example cutting, premagnetisation and prestress) and advanced concepts for using SMART structures call for a next-generation system for optimizing electric and magnetic field distributions. A fast scanner with multisensor probes and a static resolution of more than a million points has been constructed to measure all components of the magnetic field intensity vector H and to visualize them in a form acceptable to end users. The scanner can also acquire electric potentials on a surface, so that it can work with magneto-piezo devices. Advanced electronic subsystems process the results in the Magscaner Vision System, and the corresponding software, Maglab, has also been evaluated. The Dipole Contour Method (DCM) is provided for modeling different states of coupled magnetic and electric materials and for visually explaining the experimental data. Dedicated software interfaces with industrial parametric CAD systems. The measurement technique consists of acquiring a cloud of points, similarly to tomography, followed by 3D visualisation. The ongoing verification of the capabilities of the 3D digitizer will enable inspection of cylindrical SMART actuators and miniature pellets designed for oscillation dampers in various constructions, for example in the vehicle industry.
King, Anthony J; Fernie, Gordon; Azuara-Blanco, Augusto; Burr, Jennifer M; Garway-Heath, Ted; Sparrow, John M; Vale, Luke; Hudson, Jemma; MacLennan, Graeme; McDonald, Alison; Barton, Keith; Norrie, John
2017-10-26
Presentation with advanced glaucoma is the major risk factor for lifetime blindness. Effective intervention at diagnosis is expected to minimise the risk of further visual loss in this group of patients. To compare the clinical effectiveness and cost-effectiveness of primary medical management with primary surgery for people presenting with advanced open-angle glaucoma (OAG). Design: A prospective, pragmatic multicentre randomised controlled trial (RCT). Setting: Twenty-seven UK hospital eye services. Participants: Four hundred and forty patients presenting with advanced OAG, according to the Hodapp-Parish-Anderson classification of visual field loss. Participants will be randomised to medical treatment or augmented trabeculectomy (1:1 allocation minimised by centre and presence of advanced disease in both eyes). The primary outcome is vision-related quality of life measured by the National Eye Institute-Visual Function Questionnaire-25 at 24 months. Secondary outcomes include the generic EQ-5D-5L, Health Utility Index-3 and glaucoma-related health status (Glaucoma Utility Index), patient experience, visual field measured by mean deviation value, logarithm of the minimum angle of resolution (logMAR) visual acuity, intraocular pressure, adverse events, standards for driving and eligibility for blind certification. Incremental cost per quality-adjusted life-year (QALY) based on the EQ-5D-5L and glaucoma profile instrument will be estimated. The study will report the comparative effectiveness and cost-effectiveness of medical treatment against augmented trabeculectomy in patients presenting with advanced glaucoma in terms of patient-reported health and visual function, clinical outcomes and incremental cost per QALY at 2 years. The Treatment of Advanced Glaucoma Study will be the first RCT reporting outcomes from the perspective of those with advanced glaucoma. ISRCTN56878850, Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
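The economic endpoint here, incremental cost per QALY, reduces to the standard incremental cost-effectiveness ratio. The sketch below only illustrates that calculation with invented numbers; it is not trial data or the trial's analysis code.

```python
# Illustrative sketch of an incremental cost-effectiveness ratio (ICER),
# the quantity behind "incremental cost per QALY". All numbers are made up.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """ICER = difference in mean cost / difference in mean QALYs gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical 24-month means: augmented trabeculectomy vs. medical management.
print(icer(cost_new=4200.0, qaly_new=1.62, cost_old=2900.0, qaly_old=1.55))
# -> additional cost per additional QALY for the more costly strategy
```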
NASA's Global Imagery Browse Services - Technologies for Visualizing Earth Science Data
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Boller, R. A.; Baynes, K.; Schmaltz, J. E.; Thompson, C. K.; Roberts, J. T.; Rodriguez, J.; Wong, M. M.; King, B. A.; King, J.; De Luca, A. P.; Pressley, N. N.
2017-12-01
For more than 20 years, the NASA Earth Observing System (EOS) has collected earth science data for thousands of scientific parameters, now totaling nearly 15 petabytes of data. In 2013, NASA's Global Imagery Browse Services (GIBS) formed its vision to "transform how end users interact and discover [EOS] data through visualizations." This vision included leveraging scientific and community best practices and standards to provide a scalable, compliant, and authoritative source for EOS earth science data visualizations. Since that time, GIBS has grown quickly and now services millions of daily requests for over 500 imagery layers representing hundreds of earth science parameters to a broad community of users. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. The GIBS system is built upon the OnEarth and MRF open source software projects, which are provided by the GIBS team. This software facilitates standards-based access for compliance with existing GIS tools. The GIBS imagery layers are predominantly rasterized images represented in two-dimensional coordinate systems, though multiple projections are supported. The OnEarth software also supports the GIBS ingest pipeline to facilitate low-latency updates to new or updated visualizations. This presentation will focus on the following topics: an overview of GIBS visualizations and the user community; current benefits and limitations of the OnEarth and MRF software projects and related standards; GIBS access methods and their compatibilities (and incompatibilities) with existing GIS libraries and applications; considerations for visualization accuracy and understandability; future plans for more advanced visualization concepts, including vertical profiles and vector-based representations; and future plans for Amazon Web Services support and deployments.
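GIBS layers are served through standard tiled web services, so a single imagery tile can be pulled with an ordinary HTTP request. The sketch below is a rough illustration only: the endpoint pattern, layer name, tile matrix set, and date are assumptions based on public GIBS documentation rather than details from this abstract, and should be checked against the current GIBS reference before use.

```python
# Sketch of fetching one GIBS imagery tile over its WMTS REST interface.
# The URL pattern, layer name, and tile matrix set below are assumptions
# and may need adjusting to the current GIBS documentation.
import requests

url = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
       "MODIS_Terra_CorrectedReflectance_TrueColor/default/"
       "2017-12-01/250m/2/1/2.jpg")  # .../{time}/{tile matrix set}/{z}/{row}/{col}

resp = requests.get(url, timeout=30)
resp.raise_for_status()
with open("gibs_tile.jpg", "wb") as f:
    f.write(resp.content)   # standard GIS clients (e.g. QGIS) can use the same WMTS endpoint
```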
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain, in scenarios requiring human judgment.
Social network analysis in the study of nonhuman primates: A historical perspective
Brent, Lauren J.N.; Lehmann, Julia; Ramos-Fernández, Gabriel
2011-01-01
Advances over the last fifteen years have made social network analysis (SNA) a powerful tool for the study of nonhuman primate social behavior. Although many SNA-based techniques have been only very recently adopted in primatological research, others have been commonly used by primatologists for decades. The roots of SNA also stem from some of the same conceptual frameworks as the majority of nonhuman primate behavioral research. The rapid development of SNA in recent years has led to questions within the primatological community of where and how SNA fits within this field. We aim to address these questions by providing an overview of the historical relationship between SNA and the study of nonhuman primates. We begin with a brief history of the development of SNA, followed by a detailed description of the network-based visualization techniques, analytical methods and conceptual frameworks which have been employed by primatologists since as early as the 1960s. We also introduce some of the latest advances to SNA, thereby demonstrating that this approach contains novel tools for study of nonhuman primate social behavior which may be used to shed light on questions that cannot be addressed fully using more conventional methods. PMID:21433047
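As a concrete illustration of what network-based analysis adds to such observational data, the sketch below builds a small weighted grooming network with the Python networkx library and computes two common individual-level metrics. The individuals and bout counts are invented; none of this is drawn from the studies reviewed.

```python
# Minimal sketch (not from the review): simple network metrics from
# hypothetical grooming observations among four individuals.
import networkx as nx

# Weighted edges: (groomer, groomee, number of observed grooming bouts)
bouts = [("A", "B", 7), ("A", "C", 2), ("B", "C", 5), ("C", "D", 1)]

g = nx.Graph()
g.add_weighted_edges_from(bouts)

strength = dict(g.degree(weight="weight"))            # total grooming per individual
centrality = nx.eigenvector_centrality(g, weight="weight")  # influence in the network
print(strength)
print(centrality)
```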
Information visualization of the minority game
NASA Astrophysics Data System (ADS)
Jiang, W.; Herbert, R. D.; Webber, R.
2008-02-01
Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people analyze the system. In this paper, we propose the use of two space-filling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the information visualization tools. The model we use is an evolutionary version of the Minority Game, which simulates a financial market.
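For readers unfamiliar with the underlying model, the sketch below simulates the basic (non-evolutionary) Minority Game: an odd number of agents repeatedly choose one of two sides, and agents on the minority side win the round. Strategy learning is deliberately reduced to random choice here, so this is only a stand-in for the evolutionary model whose output is visualized in the paper.

```python
# Minimal sketch of the basic Minority Game: minority side wins each round.
# Agent strategies are reduced to coin flips for brevity.
import random

N_AGENTS, ROUNDS = 101, 50          # odd number of agents guarantees a minority
wins = [0] * N_AGENTS

for _ in range(ROUNDS):
    choices = [random.randint(0, 1) for _ in range(N_AGENTS)]
    ones = sum(choices)
    minority = 1 if ones < N_AGENTS - ones else 0
    for i, c in enumerate(choices):
        if c == minority:
            wins[i] += 1

# The round-by-round attendance (`ones`) is the kind of time series such
# visualizations try to make sense of.
print(max(wins), min(wins))
```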
Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3
NASA Astrophysics Data System (ADS)
Endsley, K. A.; Billmire, M. G.
2016-01-01
Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool - the Carbon Data Explorer - that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.
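The data the Carbon Data Explorer targets are regular latitude-longitude grids that vary in time, which is also the shape of data that desktop tools handle. As a point of comparison, the sketch below opens a hypothetical netCDF file of carbon flux with the Python xarray library and plots one time slice; the file name, variable name, and dimension order are assumptions, not details of the tool itself.

```python
# Sketch (not the Carbon Data Explorer): inspect one time slice of a gridded,
# time-varying dataset. File and variable names are hypothetical.
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset("carbon_flux_level3.nc")   # assumed local netCDF file
flux = ds["co2_flux"]                           # dims assumed: (time, lat, lon)

flux.isel(time=0).plot()                        # one global snapshot
plt.title(str(flux.coords["time"].values[0]))
plt.show()
```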
Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis
Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.; ...
2017-07-18
Gene expression, signal transduction, protein/chemical interactions, biomedical literature co-occurrences, and other concepts are often captured in biological network representations where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only a few of them can scale up and follow today's indisputable information growth. In this review, we briefly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and postvisualization capabilities.
Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake
NASA Astrophysics Data System (ADS)
Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.
2011-12-01
It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience that may not necessarily be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using "snapshot" static images to represent earthquake data is becoming obsolete, and the favored way to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualizations that include auditory information. Here, we convert seismic data into visualizations that include sounds, the latter being known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc's QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.). Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not just for young scientists but also for people of all ages.
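The core of audification is simply resampling: a seismogram played back hundreds of times faster than real time shifts its energy into the audible band. The sketch below, written in Python rather than the MATLAB/QuickTime workflow described above, uses a random array as a stand-in for a recorded trace; the sampling rate and speed-up factor are assumptions chosen only for illustration.

```python
# Sketch of basic audification: replay a seismogram fast enough that its
# frequency content falls in the audible range. The trace is a placeholder,
# not data from the Tohoku-Oki examples.
import numpy as np
from scipy.io import wavfile

fs_seismic = 100                               # Hz, assumed original sampling rate
speedup = 200                                  # compresses an hour of data into seconds
trace = np.random.randn(fs_seismic * 3600)     # stand-in for a recorded trace

# Normalize to [-1, 1] and write as 16-bit audio at the sped-up rate.
audio = trace / np.max(np.abs(trace))
wavfile.write("quake_audio.wav", fs_seismic * speedup,
              (audio * 32767).astype(np.int16))
```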
Transforming Polar Research with Google Glass Augmented Reality (Invited)
NASA Astrophysics Data System (ADS)
Ruthkoski, T.
2013-12-01
Augmented reality is a new technology with the potential to accelerate the advancement of science, particularly in geophysical research. Augmented reality is defined as a live, direct or indirect, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. When paired with advanced computing techniques on cloud resources, augmented reality has the potential to improve data collection techniques, visualizations, as well as in-situ analysis for many areas of research. Google is currently a pioneer of augmented reality technology and has released beta versions of their wearable computing device, Google Glass, to a select number of developers and beta testers. This community of 'Glass Explorers' is the vehicle from which Google shapes the future of their augmented reality device. Example applications of Google Glass in geophysical research range from use as a data gathering interface in harsh climates to an on-site visualization and analysis tool. Early participation in the shaping of the Google Glass device is an opportunity for researchers to tailor this new technology to their specific needs. The purpose of this presentation is to provide geophysical researchers with a hands-on first look at Google Glass and its potential as a scientific tool. Attendees will be given an overview of the technical specifications as well as a live demonstration of the device. Potential applications to geophysical research in polar regions will be the primary focus. The presentation will conclude with an open call to participate, during which attendees may indicate interest in developing projects that integrate Google Glass into their research. Application mockup: Penguin Counter, a Google Glass augmented reality application.
Steady-state visual evoked potentials as a research tool in social affective neuroscience
Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas
2017-01-01
Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
Great bowerbirds create theaters with forced perspective when seen by their audience.
Endler, John A; Endler, Lorna C; Doerr, Natalie R
2010-09-28
Birds in the infraorder Corvida [1] (ravens, jays, bowerbirds) are renowned for their cognitive abilities [2-4], which include advanced problem solving with spatial inference [4-8], tool use and complex constructions [7-10], and bowerbird cognitive ability is associated with mating success [11]. Great bowerbird males construct bowers with a long avenue from within which females view the male displaying over his bower court [10]. This predictable audience viewpoint is a prerequisite for forced (altered) visual perspective [12-14]. Males make courts with gray and white objects that increase in size with distance from the avenue entrance. This gradient creates forced visual perspective for the audience; court object visual angles subtended on the female viewer's eye are more uniform than if the objects were placed at random. Forced perspective can yield false perception of size and distance [12, 15]. After experimental reversal of their size-distance gradient, males recovered their gradients within 3 days, and there was little difference from the original after 2 wks. Variation among males in their forced-perspective quality as seen by their female audience indicates that visual perspective is available for use in mate choice, perhaps as an indicator of cognitive ability. Regardless of function, the creation and maintenance of forced visual perspective is clearly important to great bowerbirds and suggests the possibility of a previously unknown dimension of bird cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.
A web-based data visualization tool for the MIMIC-II database.
Lee, Joon; Ribey, Evan; Wallace, James R
2016-02-04
Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource for many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements especially for health researchers and clinicians who may have limited computer proficiency. In order to overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation on MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is still impossible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. Also, the next version of the tool will aim to utilize MIMIC-III which contains more data.
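To make concrete what "writing SQL queries" against such a database involves, the sketch below selects an elderly cohort and summarizes one variable. It runs against a hypothetical local SQLite extract, and the table and column names are invented stand-ins rather than the real MIMIC-II schema.

```python
# Sketch of the kind of SQL the visualization tool saves first-time users from
# writing: cohort selection plus a simple summary. Schema names are hypothetical.
import sqlite3

conn = sqlite3.connect("mimic_demo.db")   # assumed local extract
query = """
    SELECT icu_stay_type, COUNT(*) AS n, AVG(age) AS mean_age
    FROM   patients
    WHERE  age >= 65
    GROUP  BY icu_stay_type;
"""
for row in conn.execute(query):
    print(row)
conn.close()
```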
Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F
2016-04-01
The complexity of current injury-related health issues demands the use of diverse and massive data sets for comprehensive analyses, and the application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, the availability of new visual analytic methods and tools, and progress in information technology provide an opportunity to shape the next generation of injury surveillance. The objective is to introduce the conceptual bases of data visualisation and to propose a visual analytic and visualisation platform for public health surveillance in injury prevention and control. The paper introduces the conceptual bases of data visualisation, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Application of the visual analytic and visualisation platform is presented as a solution to improve access to heterogeneous data sources, enhance data exploration and analysis, communicate data effectively, and support decision-making. Applications of data visualisation concepts and a visual analytic platform could play a key role in shaping the next generation of injury surveillance. A visual analytic and visualisation platform could improve data use, analytic capacity, and the ability to effectively communicate findings and key messages. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Live Cell Visualization of Multiple Protein-Protein Interactions with BiFC Rainbow.
Wang, Sheng; Ding, Miao; Xue, Boxin; Hou, Yingping; Sun, Yujie
2018-05-18
As one of the most powerful tools to visualize PPIs in living cells, bimolecular fluorescence complementation (BiFC) has advanced greatly in recent years, including deep-tissue imaging with far-red or near-infrared fluorescent proteins and super-resolution imaging with photochromic fluorescent proteins. However, little progress has been made toward simultaneous detection and visualization of multiple PPIs in the same cell, mainly due to spectral crosstalk. In this report, we developed novel BiFC assays based on large-Stokes-shift fluorescent proteins (LSS-FPs) to detect and visualize multiple PPIs in living cells. With their large excitation/emission spectral separation, LSS-FPs can be imaged together with normal-Stokes-shift fluorescent proteins to realize multicolor BiFC imaging using a simple illumination scheme. We further demonstrated BiFC rainbow, combining the newly developed BiFC assays with previously established mCerulean/mVenus-based BiFC assays, to achieve detection and visualization of four PPI pairs in the same cell. Additionally, we prove that, with the complete spectral separation of mT-Sapphire and CyOFP1, LSS-FP-based BiFC assays can be readily combined with intensity-based FRET measurement to detect ternary protein complex formation with minimal spectral crosstalk. Thus, our newly developed LSS-FP-based BiFC assays not only expand the fluorescent protein toolbox available for BiFC but also facilitate the detection and visualization of multiple protein complex interactions in living cells.
Roets-Merken, Lieve M; Zuidema, Sytse U; Vernooij-Dassen, Myrra J F J; Kempen, Gertrudis I J M
2014-11-01
This study investigated the psychometric properties of the Severe Dual Sensory Loss screening tool, a tool designed to help nurses and care assistants identify hearing, visual and dual sensory impairment in older adults. Construct validity of the Severe Dual Sensory Loss screening tool was evaluated using Cronbach's alpha and factor analysis. Interrater reliability was calculated using Kappa statistics. To evaluate the predictive validity, sensitivity and specificity were calculated by comparison with the criterion standard assessments for hearing and vision. The criterion used for hearing impairment was a hearing loss of ≥40 decibels measured by pure-tone audiometry, and the criterion for visual impairment was a visual acuity of ≤0.3 diopter or a visual field of ≤0.3°. Feasibility was evaluated by the time needed to fill in the screening tool and the clarity of the instructions and items. The prevalence of dual sensory impairment was calculated. A total of 56 older adults receiving aged care and 12 of their nurses and care assistants participated in the study. Cronbach's alpha was 0.81 for the hearing subscale and 0.84 for the visual subscale. Factor analysis showed two constructs for hearing and two for vision. Kappa was 0.71 for the hearing subscale and 0.74 for the visual subscale. The predictive validity showed a sensitivity of 0.71 and a specificity of 0.72 for the hearing subscale, and a sensitivity of 0.69 and a specificity of 0.78 for the visual subscale. The optimum cut-off point for each subscale was a score of 1. The nurses and care assistants reported that the Severe Dual Sensory Loss screening tool was easy to use. The prevalence of hearing and visual impairment was 55% and 29%, respectively, and that of dual sensory impairment was 20%. Compared with the criterion standards for hearing and visual impairment, the Severe Dual Sensory Loss screening tool was found to be a valid and reliable tool, enabling nurses and care assistants to identify hearing, visual and dual sensory impairment among older adults. Copyright © 2014 Elsevier Ltd. All rights reserved.
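The predictive-validity figures above come from a standard 2x2 comparison of the screening result against the criterion standard. The sketch below shows that calculation with invented counts; it is not the study's data.

```python
# Sketch of the predictive-validity calculation: sensitivity and specificity
# of a screening tool versus a criterion standard. Counts are illustrative.
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 table: screening result vs. audiometric criterion.
sens, spec = sensitivity_specificity(tp=22, fp=7, fn=9, tn=18)
print(round(sens, 2), round(spec, 2))
```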
Challenges Facing Design and Analysis Tools
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Broduer, Steve (Technical Monitor)
2001-01-01
The design and analysis of future aerospace systems will strongly rely on advanced engineering analysis tools used in combination with risk mitigation procedures. The implications of such a trend place increased demands on these tools to assess off-nominal conditions, residual strength, damage propagation, and extreme loading conditions in order to understand and quantify these effects as they affect mission success. Advances in computer hardware such as CPU processing speed, memory, secondary storage, and visualization provide significant resources for the engineer to exploit in engineering design. The challenges facing design and analysis tools fall into three primary areas. The first area involves mechanics needs such as constitutive modeling, contact and penetration simulation, crack growth prediction, damage initiation and progression prediction, transient dynamics and deployment simulations, and solution algorithms. The second area involves computational needs such as fast, robust solvers, adaptivity for model and solution strategies, control processes for concurrent, distributed computing for uncertainty assessments, and immersive technology. Traditional finite element codes still require fast direct solvers which when coupled to current CPU power enables new insight as a result of high-fidelity modeling. The third area involves decision making by the analyst. This area involves the integration and interrogation of vast amounts of information - some global in character while local details are critical and often drive the design. The proposed presentation will describe and illustrate these areas using composite structures, energy-absorbing structures, and inflatable space structures. While certain engineering approximations within the finite element model may be adequate for global response prediction, they generally are inadequate in a design setting or when local response prediction is critical. Pitfalls to be avoided and trends for emerging analysis tools will be described.
Sinking Maps: A Conceptual Tool for Visual Metaphor
ERIC Educational Resources Information Center
Giampa, Joan Marie
2012-01-01
Sinking maps, created by Northern Virginia Community College professor Joan Marie Giampa, are tools that teach fine art students how to construct visual metaphor by conceptually mapping sensory perceptions. Her dissertation answers the question, "Can visual metaphor be conceptually mapped in the art classroom?" In the Prologue, Giampa…
An Interior Signage System for the USAF Academy Hospital
1979-08-01
manner. Graphic Design - Graphic design is design for visual communication. Graphic Design Tools - There are four basic graphic design tools available... specializes in the design of two-dimensional visual communication components. The graphic designer utilizes the four graphic design tools in developing
NASA Astrophysics Data System (ADS)
Mendoza, A. M. M.; Rastaetter, L.; Kuznetsova, M. M.; Mays, M. L.; Chulaki, A.; Shim, J. S.; MacNeice, P. J.; Taktakishvili, A.; Collado-Vega, Y. M.; Weigand, C.; Zheng, Y.; Mullinix, R.; Patel, K.; Pembroke, A. D.; Pulkkinen, A. A.; Boblitt, J. M.; Bakshi, S. S.; Tsui, T.
2017-12-01
The Community Coordinated Modeling Center (CCMC), with the fundamental goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research, has been serving as an integral hub for over 15 years, providing invaluable resources to both the space weather scientific and operational communities. CCMC has developed and provided to the scientific community innovative web-based points of access, including: the Runs-On-Request system, which provides unprecedented global access to the largest collection of state-of-the-art solar and space physics models; the Integrated Space Weather Analysis (iSWA) system, a powerful dissemination system for space weather information; advanced online visualization and analysis tools for more accurate interpretation of model results; standard data formats for simulation data downloads; and mobile apps for viewing space weather data anywhere. In addition to supporting research and performing model evaluations, CCMC also supports space science education by hosting summer students through local universities. In this poster, we will showcase CCMC's latest innovative tools and services, and the CCMC tools that have revolutionized the way we do research and improved our operational space weather capabilities. CCMC's free tools and resources are all publicly available online (http://ccmc.gsfc.nasa.gov).
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
Persson, Johanna; Dalholm, Elisabeth Hornyánszky; Johansson, Gerd
2014-01-01
To demonstrate the use of visualization and simulation tools in order to involve stakeholders and inform the process in hospital change processes, illustrated by an empirical study from a children's emergency clinic. Reorganization and redevelopment of a hospital is a complex activity that involves many stakeholders and demands. Visualization and simulation tools have proven useful for involving practitioners and eliciting relevant knowledge. More knowledge is desired about how these tools can be implemented in practice for hospital planning processes. A participatory planning process including practitioners and researchers was executed over a 3-year period to evaluate a combination of visualization and simulation tools to involve stakeholders in the planning process and to elicit knowledge about needs and requirements. The initial clinic proposal from the architect was discarded as a result of the empirical study. Much general knowledge about the needs of the organization was extracted by means of the adopted tools. Some of the tools proved to be more accessible than others for the practitioners participating in the study. The combination of tools added value to the process by presenting information in alternative ways and eliciting questions from different angles. Visualization and simulation tools inform a planning process (or other types of change processes) by providing the means to see beyond present demands and current work structures. Long-term involvement in combination with accessible tools is central for creating a participatory setting where the practitioners' knowledge guides the process. © 2014 Vendome Group, LLC.
Optimal visual-haptic integration with articulated tools.
Takahashi, Chie; Watt, Simon J
2017-05-01
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world-seeing and feeling the same thing-and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
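The maximum-likelihood predictions referred to above follow from weighting each cue by its inverse variance (its reliability). The sketch below is a generic illustration of that computation, not the authors' analysis code; the sigma values are arbitrary.

```python
# Generic sketch of maximum-likelihood (reliability-weighted) cue integration.
# Each cue's weight is proportional to its inverse variance; the fused estimate
# is more precise than either cue alone. Values are illustrative only.
def fuse(size_vis, sigma_vis, size_hap, sigma_hap):
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_hap**2)
    w_hap = 1.0 - w_vis
    fused = w_vis * size_vis + w_hap * size_hap
    fused_sigma = (sigma_vis**-2 + sigma_hap**-2) ** -0.5   # <= min(sigma_vis, sigma_hap)
    return fused, fused_sigma

print(fuse(size_vis=50.0, sigma_vis=2.0, size_hap=46.0, sigma_hap=4.0))
```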
NASA Astrophysics Data System (ADS)
Cedilnik, Andrej; Baumes, Jeffrey; Ibanez, Luis; Megason, Sean; Wylie, Brian
2008-01-01
Dramatic technological advances in the field of genomics have made it possible to sequence the complete genomes of many different organisms. With this overwhelming amount of data at hand, biologists are now confronted with the challenge of understanding the function of the many different elements of the genome. One of the best places to start gaining insight on the mechanisms by which the genome controls an organism is the study of embryogenesis. There are multiple and inter-related layers of information that must be established in order to understand how the genome controls the formation of an organism. One is cell lineage which describes how patterns of cell division give rise to different parts of an organism. Another is gene expression which describes when and where different genes are turned on. Both of these data types can now be acquired using fluorescent laser-scanning (confocal or 2-photon) microscopy of embryos tagged with fluorescent proteins to generate 3D movies of developing embryos. However, analyzing the wealth of resulting images requires tools capable of interactively visualizing several different types of information as well as being scalable to terabytes of data. This paper describes how the combination of existing large data volume visualization and the new Titan information visualization framework of the Visualization Toolkit (VTK) can be applied to the problem of studying the cell lineage of an organism. In particular, by linking the visualization of spatial and temporal gene expression data with novel ways of visualizing cell lineage data, users can study how the genome regulates different aspects of embryonic development.
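The embryo-analysis application described above is built on VTK's standard rendering pipeline. The sketch below is not the paper's code; it only shows the minimal source-mapper-actor-renderer chain in Python (assuming the vtk package is installed) on which the volume rendering and the Titan information-visualization layers are built.

```python
# Minimal VTK rendering pipeline in Python: geometry source -> mapper ->
# actor -> renderer -> interactive window. A stand-in sphere replaces the
# paper's microscopy volumes and lineage graphics.
import vtk

sphere = vtk.vtkSphereSource()          # placeholder geometry
sphere.SetThetaResolution(32)
sphere.SetPhiResolution(32)

mapper = vtk.vtkPolyDataMapper()        # turns geometry into renderable primitives
mapper.SetInputConnection(sphere.GetOutputPort())

actor = vtk.vtkActor()                  # places the mapped data in the scene
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                      # opens an interactive 3D view
```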
Castaño-Díez, Daniel; Kudryashev, Mikhail; Stahlberg, Henning
2017-02-01
Cryo-electron tomography allows macromolecular complexes within vitrified, intact, thin cells or sections thereof to be visualized, and structural analysis to be performed in situ by averaging over multiple copies of the same molecules. Image processing for subtomogram averaging is specific and cumbersome, due to the large amount of data, its three-dimensional nature, and its anisotropic resolution. Here, we streamline data processing for subtomogram averaging by introducing an archiving system, Dynamo Catalogue. This system manages tomographic data from multiple tomograms and allows visual feedback during all processing steps, including particle picking, extraction, alignment and classification. The file structure of a processing project includes logfiles of performed operations, and can be backed up and shared between users. Command-line commands, database queries and a set of GUIs give the user versatile control over the process. Here, we introduce a set of geometric tools that streamline particle picking from simple geometries (filaments, spheres, tubes, vesicles) and complex ones (arbitrary 2D surfaces, rare instances of proteins with geometric restrictions, and 2D and 3D crystals). Advanced functionality, such as manual alignment and subboxing, is useful when initial templates are generated for alignment and for project customization. Dynamo Catalogue is part of the open source package Dynamo and includes tools to ensure format compatibility with the subtomogram averaging functionalities of other packages, such as Jsubtomo, PyTom, PEET, EMAN2, XMIPP and Relion. Copyright © 2016. Published by Elsevier Inc.
OpenGl Visualization Tool and Library Version: 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2010-06-22
GLVis is an OpenGL tool for visualization of finite element meshes and functions. When started without any options, GLVis runs as a server, which waits for socket connections and visualizes any received data. In this way, the results of simulations on a remote (parallel) machine can be visualized on the local user's desktop. GLVis can also be used to visualize a mesh with or without a finite element function (solution). It can run a batch sequence of commands (GLVis scripts), or display previously saved socket streams.
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
NASA Astrophysics Data System (ADS)
Rohini, B. S.; Nagabhushana, H.; Darshan, G. P.; Basavaraj, R. B.; Sharma, S. C.; Sudarmani, R.
2017-11-01
In forensic investigation, identification of various types of ridge details is essential in order to identify the criminals associated with various crimes. Even though several methods and labeling agents are available to visualize latent fingerprints (LFPs), a simple, accurate, cost-effective, and non-destructive tool is still required. In the present work, CeO2 nanopowders (NPs) are prepared via a simple solution combustion route using Tamarindus indica fruit extract as a fuel. The optimized NPs are utilized for visualization of LFPs on various surfaces by the powder dusting method. Results revealed that the visualized LFPs exhibit Level 3 features, such as pores and ridge contours, under normal light with high sensitivity and without background hindrance. The photometric characteristics of the prepared samples show blue emission, which is highly useful in warm light-emitting diodes. Photocatalytic studies were carried out with different Methylene blue (MB) dye concentrations and pH values. The obtained results reveal that the CeO2 NPs exhibit excellent catalytic properties and can act as a good catalytic reagent. The findings demonstrate that the prepared NPs are quite useful as a labeling agent for visualization of LFPs, as efficient catalysts for dye degradation, and in solid-state lighting applications.
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
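The content-based retrieval described above rests on turning each image into a numeric feature vector and ranking database entries by distance to the query. The sketch below illustrates the idea with a plain grayscale histogram and random arrays standing in for pollen images; the real system uses richer visual features and dedicated index structures.

```python
# Toy sketch of content-based image retrieval: describe each image with a
# simple feature vector (intensity histogram) and rank by distance to a query.
import numpy as np

def feature_vector(image, bins=32):
    """Normalized intensity histogram of a 2-D grayscale image array."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(0)
database = [rng.random((64, 64)) for _ in range(100)]   # stand-in "pollen images"
query = rng.random((64, 64))

feats = np.array([feature_vector(img) for img in database])
dists = np.linalg.norm(feats - feature_vector(query), axis=1)
print("closest matches:", np.argsort(dists)[:5])
```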
Subsurface data visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul
2017-04-01
Due to the increasing complexity and size of geological data, visualization of these data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education, to demonstrate the importance of the geology to policy makers. In the Netherlands two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km with 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual Reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way than a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer, off-the-shelf VR hardware enabled us to create a new, intuitive and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis, e.g. of hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing can be done in any direction using a 'virtual knife'. Future plans are to further improve performance from a 30 Hz up to a 90 Hz update rate to reduce possible motion sickness, and to add more advanced filtering capabilities as well as a multi-user setup, annotation capabilities and visualization of historical data.
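Conceptually, both the unit filter and the 'virtual knife' reduce to boolean masks over the voxel grid before rendering. The sketch below shows that idea on a toy numpy array; the grid dimensions, unit codes, and cutting plane are invented and unrelated to the actual GSN model.

```python
# Toy sketch of voxel filtering and slicing prior to rendering. Grid size,
# unit codes, and the cutting plane are placeholders, not the GSN model.
import numpy as np

ni, nj, nk = 200, 250, 50                        # toy grid of 100 x 100 x 5 m cells
rng = np.random.default_rng(1)
units = rng.integers(1, 6, size=(ni, nj, nk))    # hydrogeological unit code per voxel

# Filter: keep only units 2 and 4 (say, the aquifers), hide everything else.
visible = np.isin(units, [2, 4])

# "Virtual knife": drop every voxel on one side of an arbitrary vertical plane.
ii, jj = np.meshgrid(np.arange(ni), np.arange(nj), indexing="ij")
keep_side = (0.7 * ii - jj + 40.0) > 0
visible &= keep_side[:, :, None]                 # broadcast the 2-D mask over depth

print(visible.sum(), "voxels left to render")
```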
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have merits beyond being cost-effective and easy to disseminate, anatomy is inherently three-dimensional, and by using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high-quality 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization platform for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection, viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is versatile enough to enable a wide variety of applications in need of 3D visualization, we designed our study as an interactive game that allows users to explore the anatomy of various organs and systems. The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer, which should provide numerous benefits over traditional 2D display and interaction modalities; in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
VisAdapt: A Visualization Tool to Support Climate Change Adaptation.
Johansson, Jimmy; Opach, Tomasz; Glaas, Erik; Neset, Tina-Simone; Navarra, Carlo; Linner, Bjorn-Ola; Rod, Jan Ketil
2017-01-01
The web-based visualization tool VisAdapt was developed to help laypeople in the Nordic countries assess how anticipated climate change will impact their homes. The tool guides users through a three-step visual process that helps them explore risks and identify adaptive actions specifically tailored to their location and house type. This article walks through the tool's multistep, user-centered design process. Although VisAdapt's target end users are Nordic homeowners, the insights gained from the development process and the lessons learned from the project are applicable to a wide range of domains.
Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows
NASA Astrophysics Data System (ADS)
Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.
2014-12-01
The U.S. Department of Energy (DOE) is investing in the development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, including toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
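The automated job launching and monitoring capability mentioned above follows a common submit-then-poll pattern on batch-scheduled clusters. The sketch below illustrates that pattern for a SLURM-style queue; the launch script name and the workflow are hypothetical, and this is not Akuna's actual implementation.

```python
# Rough sketch of automated job submission and polling on a SLURM-style
# cluster (hypothetical script name; not Akuna's actual implementation).
import subprocess
import time

def submit(job_script: str) -> str:
    """Submit a batch job; sbatch prints 'Submitted batch job <id>', so return the last token."""
    out = subprocess.run(["sbatch", job_script], capture_output=True, text=True, check=True)
    return out.stdout.strip().split()[-1]

def wait_for(job_id: str, poll_seconds: int = 60) -> None:
    """Poll the queue until the job no longer appears in squeue output."""
    while True:
        out = subprocess.run(["squeue", "-j", job_id, "-h"], capture_output=True, text=True)
        if not out.stdout.strip():
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    jid = submit("run_amanzi.sh")   # hypothetical Amanzi launch script
    wait_for(jid)
    print("simulation", jid, "finished; results can be visualized remotely")
```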
Visualization of Biosurfactant Film Flow in a Bacillus subtilis Swarm Colony on an Agar Plate
Kim, Kyunghoon; Kim, Jung Kyung
2015-01-01
Collective bacterial dynamics plays a crucial role in colony development. Although many research groups have studied the behavior of fluidic swarm colonies, the detailed mechanics of their motion remains elusive. Here, we developed a visualization method using submicron fluorescent beads for investigating the flow field in the thin layer of fluid that covers a Bacillus subtilis swarm colony growing on an agar plate. The beads were initially embedded in the agar plate and subsequently distributed spontaneously at the upper surface of the expanding colony. We conducted long-term live cell imaging of the B. subtilis colony using the fluorescent tracers and obtained high-resolution velocity maps of microscale vortices in the swarm colony using particle image velocimetry. A distinct periodic fluctuation in the average speed and vorticity of flow in the swarm colony was observed in the inner region of the colony and correlated with the switch between bacterial swarming and growth phases. At the advancing edge of the colony, both the velocity and vorticity magnitudes of the flow were inversely correlated with the spreading speed of the swarm edge. The advanced imaging tool developed in this study should facilitate further understanding of the effect of microscale vortices in the swarm colony on the collective dynamics of bacteria. PMID:26343634
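The velocity and vorticity maps mentioned above come from particle image velocimetry. The sketch below shows, in generic form, how vorticity and mean speed can be derived from a PIV velocity field on a regular grid; it illustrates the derivation only, not the authors' processing pipeline, and the velocity fields are random stand-ins.

```python
# Compute vorticity and mean speed from a PIV velocity field
# (generic derivation; not the authors' processing pipeline).
import numpy as np

def vorticity(u: np.ndarray, v: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Out-of-plane vorticity: dv/dx - du/dy on a regular grid."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy

rng = np.random.default_rng(2)
u = rng.normal(size=(64, 64))   # stand-in x-velocity field (pixels/frame)
v = rng.normal(size=(64, 64))   # stand-in y-velocity field
omega = vorticity(u, v, dx=1.0, dy=1.0)
print("mean speed:", np.hypot(u, v).mean(), "mean |vorticity|:", np.abs(omega).mean())
```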
Graham, N.; Zeman, A.; Young, A.; Patterson, K.; Hodges, J.
1999-01-01
OBJECTIVES: To investigate the roles of visual and tactile information in a dyspraxic patient with corticobasal degeneration (CBD) who showed dramatic facilitation in miming the use of a tool or object when he was given a tool to manipulate, and to study the nature of the praxic and neuropsychological deficits in CBD. METHODS: The subject had clinically diagnosed CBD, and exhibited alien limb behaviour and striking ideomotor dyspraxia. General neuropsychological evaluation focused on constructional and visuospatial abilities, calculation, verbal fluency, episodic and semantic memory, plus spelling and writing, because impairments in this domain were presenting complaints. Four experiments assessed the roles of visual and tactile information in the facilitation of motor performance by tools. Experiment 1 evaluated the patient's performance of six limb transitive actions under six conditions: (1) after he described the relevant tool from memory, (2) after he was shown a line drawing of the tool, (3) after he was shown a real exemplar of the tool, (4) after he watched the experimenter perform the action, (5) while he was holding the tool, and (6) immediately after he had performed the action with the tool but with the tool removed from his grasp. Experiment 2 evaluated the use of the same six tools when the patient had tactile but no visual information (while he was blindfolded). Experiments 3 and 4 assessed performance of actions appropriate to the same six tools when the patient had either neutral or inappropriate tactile feedback, that is, while he was holding a non-tool object or a different tool. RESULTS: Miming of tool use was not facilitated by visual input; moreover, lack of visual information in the blindfolded condition did not reduce performance. The principal positive finding was a dramatic facilitation of the patient's ability to demonstrate object use when he was holding either the appropriate tool or a neutral object. Tools inappropriate to the requested action produced involuntary performance of the stimulus-relevant action. CONCLUSIONS: Tactile stimulation was paramount in the facilitation of motor performance in tool use by this patient with CBD. This outcome suggests that tactile information should be included in models which hypothesise modality-specific inputs to the action production system. Significant impairments in spelling and letter production that have not previously been reported in CBD have also been documented. PMID:10449556
Proteopedia: Exciting Advances in the 3D Encyclopedia of Biomolecular Structure
NASA Astrophysics Data System (ADS)
Prilusky, Jaime; Hodis, Eran; Sussman, Joel L.
Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia (http://www.proteopedia.org) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and the availability of biological assemblies of protein structures alongside their asymmetric units.
Biomolecular Imaging with Coherent Nonlinear Vibrational Microscopy
Chung, Chao-Yu; Boik, John; Potma, Eric O.
2014-01-01
Optical imaging with spectroscopic vibrational contrast is a label-free solution for visualizing, identifying, and quantifying a wide range of biomolecular compounds in biological materials. Both linear and nonlinear vibrational microscopy techniques derive their imaging contrast from infrared active or Raman allowed molecular transitions, which provide a rich palette for interrogating chemical and structural details of the sample. Yet nonlinear optical methods, which include both second-order sum-frequency generation (SFG) and third-order coherent Raman scattering (CRS) techniques, offer several improved imaging capabilities over their linear precursors. Nonlinear vibrational microscopy features unprecedented vibrational imaging speeds, provides strategies for higher spatial resolution, and gives access to additional molecular parameters. These advances have turned vibrational microscopy into a premier tool for chemically dissecting live cells and tissues. This review discusses the molecular contrast of SFG and CRS microscopy and highlights several of the advanced imaging capabilities that have impacted biological and biomedical research. PMID:23245525
Application of Genomic In Situ Hybridization in Horticultural Science
Ramzan, Fahad; Lim, Ki-Byung
2017-01-01
Molecular cytogenetic techniques, such as in situ hybridization methods, are valuable tools for analyzing genomic structure and function, chromosome constituents, recombination patterns, alien gene introgression, genome evolution, aneuploidy, and polyploidy, as well as for visualizing genome constitution and discriminating chromosomes of different genomes in allopolyploids of various horticultural crops. Advances in GISH such as multicolor detection offer a significant approach to analyzing the small and numerous chromosomes of fruit species, for example Diospyros hybrids. The technique has proved to be the most exact and effective way to confirm hybrid status and is remarkably helpful for distinguishing the donor parental genomes in ornamental hybrids such as Clivia, Rhododendron, and Lycoris. Genome characterization facilitates the selection of hybrids with potentially desirable characteristics early in hybridization breeding, as the technique expedites the detection of chromosomes carrying introgressed sequences. This review summarizes applications and advances of genomic in situ hybridization (GISH) techniques in horticultural plants. PMID:28459054
The Mission Planning Lab: A Visualization and Analysis Tool
NASA Technical Reports Server (NTRS)
Daugherty, Sarah C.; Cervantes, Benjamin W.
2009-01-01
Simulation and visualization are powerful decision making tools that are time-saving and cost-effective. Space missions pose testing and evaluation challenges that can be overcome through modeling, simulation, and visualization of mission parameters. The National Aeronautics and Space Administration's (NASA) Wallops Flight Facility (WFF) capitalizes on the benefits of modeling, simulation, and visualization tools through a project initiative called the Mission Planning Lab (MPL).
McNally, Colin P.; Eng, Alexander; Noecker, Cecilia; Gagne-Maynard, William C.; Borenstein, Elhanan
2018-01-01
The abundance of both taxonomic groups and gene categories in microbiome samples can now be easily assayed via various sequencing technologies, and visualized using a variety of software tools. However, the assemblage of taxa in the microbiome and its gene content are clearly linked, and tools for visualizing the relationship between these two facets of microbiome composition and for facilitating exploratory analysis of their co-variation are lacking. Here we introduce BURRITO, a web tool for interactive visualization of microbiome multi-omic data with paired taxonomic and functional information. BURRITO simultaneously visualizes the taxonomic and functional compositions of multiple samples and dynamically highlights relationships between taxa and functions to capture the underlying structure of these data. Users can browse for taxa and functions of interest and interactively explore the share of each function attributed to each taxon across samples. BURRITO supports multiple input formats for taxonomic and metagenomic data, allows adjustment of data granularity, and can export generated visualizations as static publication-ready formatted figures. In this paper, we describe the functionality of BURRITO, and provide illustrative examples of its utility for visualizing various trends in the relationship between the composition of taxa and functions in complex microbiomes. PMID:29545787
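The per-taxon share of each function that BURRITO displays can be thought of, in simplified form, as the product of taxon abundances and per-taxon gene copy numbers, normalized per function. The sketch below illustrates that attribution on hypothetical matrices; it is not BURRITO's internal code or data format.

```python
# Simplified taxon-to-function attribution (hypothetical matrices;
# not BURRITO's internal code or data formats).
import numpy as np

taxa = ["Bacteroides", "Prevotella", "Faecalibacterium"]
functions = ["glycolysis", "butyrate_synthesis"]

# Relative abundance of each taxon in one sample
abundance = np.array([0.5, 0.3, 0.2])

# Copy number of each function per taxon (rows: taxa, columns: functions)
copy_number = np.array([[4.0, 0.0],
                        [3.0, 1.0],
                        [2.0, 5.0]])

# Contribution of each taxon to each function, normalized per function
contribution = abundance[:, None] * copy_number
share = contribution / contribution.sum(axis=0, keepdims=True)

for j, fn in enumerate(functions):
    for i, tx in enumerate(taxa):
        print(f"{fn}: {tx} contributes {share[i, j]:.2f}")
```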
A validated set of tool pictures with matched objects and non-objects for laterality research.
Verma, Ark; Brysbaert, Marc
2015-01-01
Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.
Application of Multimedia Design Principles to Visuals Used in Course-Books: An Evaluation Tool
ERIC Educational Resources Information Center
Kuzu, Abdullah; Akbulut, Yavuz; Sahin, Mehmet Can
2007-01-01
This paper introduces an evaluation tool prepared to examine the quality of visuals in course-books. The tool is based on Mayer's Cognitive Theory of Multimedia Learning (i.e. Generative Theory) and its principles regarding the correct use of illustrations within text. The reason to generate the tool, the development process along with the…
Cytoscape: the network visualization tool for GenomeSpace workflows
Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P.
2014-01-01
Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013. PMID:25165537
Kunimatsu-Sanuki, Shiho; Iwase, Aiko; Araie, Makoto; Aoki, Yuki; Hara, Takeshi; Fukuchi, Takeo; Udagawa, Sachiko; Ohkubo, Shinji; Sugiyama, Kazuhisa; Matsumoto, Chota; Nakazawa, Toru; Yamaguchi, Takuhiro; Ono, Hiroshi
2017-01-01
Background/aims: To assess the role of specific visual subfields in collisions with oncoming cars during simulated driving in patients with advanced glaucoma. Methods: Normal subjects and patients with glaucoma with mean deviation <–12 dB in both eyes (Humphrey Field Analyzer 24-2 SITA-S program) used a driving simulator (DS; Honda Motor, Tokyo). Two scenarios in which oncoming cars turned right crossing the driver's path were chosen. We compared the binocular integrated visual field (IVF) in the patients who were involved in collisions and those who were not. We performed a multivariate logistic regression analysis; the dependent parameter was collision involvement, and the independent parameters were age, visual acuity and mean sensitivity of the IVF subfields. Results: The study included 43 normal subjects and 100 patients with advanced glaucoma. Five of the 100 patients with advanced glaucoma experienced simulator sickness during the main test and were thus excluded. In total, 95 patients with advanced glaucoma and 43 normal subjects completed the main test of the DS. Patients with advanced glaucoma had significantly more collisions than normal subjects in one or both DS scenarios (p<0.001). The patients with advanced glaucoma who were involved in collisions were older (p=0.050), had worse visual acuity in the better eye (p<0.001), and had lower mean IVF sensitivity in the inferior hemifield, both 0°–12° and 13°–24°, compared with those who were not involved in collisions (p=0.012 and p=0.034). A logistic regression analysis revealed that collision involvement was significantly associated with decreased inferior IVF mean sensitivity from 13° to 24° (p=0.041), in addition to older age and lower visual acuity (p=0.018 and p<0.001). Conclusions: Our data suggest that the inferior hemifield was associated with the incidence of motor vehicle collisions with oncoming cars in patients with advanced glaucoma. PMID:28400370
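The multivariate logistic regression described above (collision involvement regressed on age, visual acuity, and IVF subfield sensitivity) can be outlined with standard statistical tooling. The sketch below uses synthetic data with made-up effect sizes purely for illustration; it is not the study's dataset or its exact model specification.

```python
# Sketch of the collision-involvement logistic regression on synthetic data
# (illustrative only; not the study's dataset or exact model specification).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 95
age = rng.uniform(40, 80, n)                       # years
visual_acuity = rng.normal(0.1, 0.2, n)            # logMAR, better eye
inferior_ivf_13_24 = rng.normal(15, 8, n)          # dB, inferior 13-24 deg

# Synthetic outcome loosely following the reported direction of effects
logit = -2.0 + 0.03 * (age - 60) + 2.0 * visual_acuity - 0.08 * (inferior_ivf_13_24 - 15)
collision = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, visual_acuity, inferior_ivf_13_24]))
model = sm.Logit(collision, X).fit(disp=False)
print(model.summary(xname=["const", "age", "visual_acuity", "inferior_IVF_13_24"]))
```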
The Use of Visual Advance Organizers for Learning Earth Science Concepts.
ERIC Educational Resources Information Center
Weisberg, Joseph S.
This study was designed to determine whether advance organizers in the form of visual aids might serve the same function as Ausubel's verbal advance organizers. The study used a 4 × 3 × 2 factorial ANOVA design. Ninety-six eighth-grade students were involved in the study. One group was exposed to a physiographic diagram…
Belle2VR: A Virtual-Reality Visualization of Subatomic Particle Physics in the Belle II Experiment.
Duer, Zach; Piilonen, Leo; Glasson, George
2018-05-01
Belle2VR is an interactive virtual-reality visualization of subatomic particle physics, designed by an interdisciplinary team as an educational tool for learning about and exploring subatomic particle collisions. This article describes the tool, discusses visualization design decisions, and outlines our process for collaborative development.
Learner-Information Interaction: A Macro-Level Framework Characterizing Visual Cognitive Tools
ERIC Educational Resources Information Center
Sedig, Kamran; Liang, Hai-Ning
2008-01-01
Visual cognitive tools (VCTs) are external mental aids that maintain and display visual representations (VRs) of information (i.e., structures, objects, concepts, ideas, and problems). VCTs allow learners to operate upon the VRs to perform epistemic (i.e., reasoning and knowledge-based) activities. In VCTs, the mechanism by which learners operate…
Visual Data Comm: A Tool for Visualizing Data Communication in the Multi Sector Planner Study
NASA Technical Reports Server (NTRS)
Lee, Hwasoo Eric
2010-01-01
Data comm is a new technology proposed for the future air transport system as a potential means of providing comprehensive data connectivity. It is a key enabler for managing 4D trajectories digitally, potentially resulting in improved flight times and increased throughput. Future concepts with data comm integration have been tested in a number of human-in-the-loop studies, but analyzing the results has proven particularly challenging because the future traffic environment in which data comm is fully enabled assumes high traffic density, resulting in data sets with large amounts of information. This paper describes the motivation, design, and current and potential future applications of Visual Data Comm (VDC), a data visualization tool developed in Java using the Processing library, a toolkit designed for interactive visualization programming. The paper includes an example application of VDC to data from the most recent Multi Sector Planner study, conducted at NASA's Airspace Operations Laboratory in 2009, in which VDC was used to visualize and interpret data comm activities.
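As a rough illustration of the kind of view such a tool can provide, the sketch below plots synthetic data comm message times per aircraft as a timeline. It is a toy Python/matplotlib example, not the VDC implementation (which is written in Java with the Processing library), and all data are made up.

```python
# Toy timeline of data comm messages per aircraft, in the spirit of VDC
# (hypothetical data; not the VDC implementation).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
aircraft = [f"AC{i:02d}" for i in range(10)]

fig, ax = plt.subplots(figsize=(8, 4))
for row, ac in enumerate(aircraft):
    # Random message times within a one-hour run (seconds)
    times = np.sort(rng.uniform(0, 3600, rng.integers(5, 20)))
    ax.scatter(times, np.full(times.size, row), s=12)
ax.set_yticks(range(len(aircraft)))
ax.set_yticklabels(aircraft)
ax.set_xlabel("time into run (s)")
ax.set_title("Data comm messages per aircraft (synthetic)")
plt.tight_layout()
plt.show()
```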
Recent advances in 3D printing of biomaterials.
Chia, Helena N; Wu, Benjamin M
2015-01-01
3D Printing promises to produce complex biomedical devices according to computer design using patient-specific anatomical data. Since its initial use as pre-surgical visualization models and tooling molds, 3D Printing has slowly evolved to create one-of-a-kind devices, implants, scaffolds for tissue engineering, diagnostic platforms, and drug delivery systems. Fueled by the recent explosion in public interest and access to affordable printers, there is renewed interest to combine stem cells with custom 3D scaffolds for personalized regenerative medicine. Before 3D Printing can be used routinely for the regeneration of complex tissues (e.g. bone, cartilage, muscles, vessels, nerves in the craniomaxillofacial complex), and complex organs with intricate 3D microarchitecture (e.g. liver, lymphoid organs), several technological limitations must be addressed. In this review, the major materials and technology advances within the last five years for each of the common 3D Printing technologies (Three Dimensional Printing, Fused Deposition Modeling, Selective Laser Sintering, Stereolithography, and 3D Plotting/Direct-Write/Bioprinting) are described. Examples are highlighted to illustrate progress of each technology in tissue engineering, and key limitations are identified to motivate future research and advance this fascinating field of advanced manufacturing.
New advances in probing cell–extracellular matrix interactions
2017-01-01
The extracellular matrix (ECM) provides structural and biochemical support to cells within tissues. An emerging body of evidence has established that the ECM plays a key role in cell mechanotransduction – the study of coupling between mechanical inputs and cellular phenotype – through either mediating transmission of forces to the cells, or presenting mechanical cues that guide cellular behaviors. Recent progress in cell mechanotransduction research has been facilitated by advances of experimental tools, particularly microtechnologies, engineered biomaterials, and imaging and analytical methods. Microtechnologies have enabled the design and fabrication of controlled physical microenvironments for the study and measurement of cell–ECM interactions. Advances in engineered biomaterials have allowed researchers to develop synthetic ECMs that mimic tissue microenvironments and investigate the impact of altered physicochemical properties on various cellular processes. Finally, advanced imaging and spectroscopy techniques have facilitated the visualization of the complex interaction between cells and ECM in vitro and in living tissues. This review will highlight the application of recent innovations in these areas to probing cell–ECM interactions. We believe cross-disciplinary approaches, combining aspects of the different technologies reviewed here, will inspire innovative ideas to further elucidate the secrets of ECM-mediated cell control. PMID:28352896
Scientific Visualization Made Easy for the Scientist
NASA Astrophysics Data System (ADS)
Westerhoff, M.; Henderson, B.
2002-12-01
amira® is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the marketplace since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard; amiraDev™, which lets users extend the product's capabilities; amiraMol™, used for molecular visualization; amiraDeconv™, used to improve the quality of image data; and amiraVR™, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats, including a 'raw' format allowing immediate access to native uniform data sets. amira uses the power and speed of the OpenGL® and Open Inventor™ graphics libraries and 3D graphics accelerators to provide access to over 145 modules, enabling users to process, probe, analyze and visualize their data. The amiraMol™ extension adds powerful tools for molecular visualization to the existing amira platform, with support for standard molecular file formats and tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. Deconvolution is the process of increasing image quality and resolution by computationally compensating for artifacts of the recording process. amiraDeconv supports 3D wide-field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms: non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (like numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE® or Holobench® systems. Among other features, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality)-ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.
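Non-blind deconvolution with a measured point spread function, as described above, can be illustrated with a standard Richardson-Lucy routine. The example below uses scikit-image on synthetic data and a Gaussian stand-in for the PSF; it is a generic illustration of the technique, not amiraDeconv's algorithm.

```python
# Generic non-blind (Richardson-Lucy) deconvolution with a known PSF
# (illustrative scikit-image example; not amiraDeconv's implementation).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(5)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = 1.0   # point-like objects

# Gaussian stand-in for a measured point spread function
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()

# Blur the ground truth and add a little noise to simulate recording artifacts
blurred = gaussian_filter(truth, sigma=1.5) + 0.01 * rng.standard_normal((64, 64))
restored = restoration.richardson_lucy(np.clip(blurred, 0, None), psf, 30)
print("restored image range:", restored.min(), restored.max())
```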
Molecular and Cellular Biology Animations: Development and Impact on Student Learning
2005-01-01
Educators often struggle when teaching cellular and molecular processes because typically they have only two-dimensional tools to teach something that plays out in four dimensions. Learning research has demonstrated that visualizing processes in three dimensions aids learning, and animations are effective visualization tools for novice learners and aid with long-term memory retention. The World Wide Web Instructional Committee at North Dakota State University has used these research results as an inspiration to develop a suite of high-quality animations of molecular and cellular processes. Currently, these animations represent transcription, translation, bacterial gene expression, messenger RNA (mRNA) processing, mRNA splicing, protein transport into an organelle, the electron transport chain, and the use of a biological gradient to drive adenosine triphosphate synthesis. These animations are integrated with an educational module that consists of First Look and Advanced Look components that feature captioned stills from the animation representing the key steps in the processes at varying levels of complexity. These animation-based educational modules are available via the World Wide Web at http://vcell.ndsu.edu/animations. An in-class research experiment demonstrated that student retention of content material was significantly better when students received a lecture coupled with the animations and then used the animation as an individual study activity. PMID:15917875