Visual and Experiential Learning Opportunities through Geospatial Data
NASA Astrophysics Data System (ADS)
Gardiner, N.; Bulletins, S.
2007-12-01
Global observation data from satellites are essential for both research and education about Earth's climate because they help convey the temporal and spatial scales inherent to the subject, which are beyond most people's experience. Experts in the development of visualizations using spatial data distinguish the process of learning through data exploration from the process of learning by absorbing a story told from beginning to end. The former requires the viewer to absorb complex spatial and temporal dynamics inherent to visualized data and is therefore a process best undertaken by those familiar with the data and processes represented. The latter requires that the viewer understand the intended presentation of concepts, so storytelling can be employed to educate viewers with varying backgrounds and familiarity with a given subject. Three examples of climate science education, drawn from the current science program Science Bulletins (American Museum of Natural History, New York, USA), demonstrate the power of visualized global Earth observations for climate science education. The first example seeks to explain the potential for sea level rise on a global basis. A short feature film includes the visualized, projected effects of sea level rise at local to global scales; this visualization complements laboratory and field observations of glacier retreat and paleoclimatic reconstructions based on fossilized coral reef analysis, each of which is also depicted in the film. The narrative structure keeps learners focused on discrete scientific concepts. The second example uses half-hourly cloud observations to demonstrate weather and climate patterns to global audiences. Here, the scientific messages are qualitatively simpler, but viewers must build their own understanding of the complex dynamics in the visualized data. Finally, we present plans for distributing climate science education products via mediated public events in which participants learn from climate and geovisualization experts working collaboratively. This last example provides an opportunity for deep exploration of patterns and processes in a live setting and makes full use of complementary talents, including computer science, internet-enabled data sharing, remote sensing image processing, and meteorology. These innovative examples from informal educators serve as powerful pedagogical models to consider for the classroom of the future.
Cartograms Facilitate Communication of Climate Change Risks and Responsibilities
NASA Astrophysics Data System (ADS)
Döll, Petra
2017-12-01
Communication of climate change (CC) risks is challenging, in particular if global-scale, spatially resolved quantitative information is to be conveyed. Typically, visualization of CC risks, which arise from the combination of hazard, exposure and vulnerability, is confined to showing only the hazards in the form of global thematic maps. This paper explores the potential of contiguous value-by-area cartograms, that is, distorted density-equalizing maps, for improving communication of CC risks and the countries' differentiated responsibilities for CC. Two global-scale cartogram sets visualize, as an example, groundwater-related CC risks in 0.5° grid cells; a third shows the correlation of (cumulative) fossil-fuel carbon dioxide emissions with the countries' population and gross domestic product. Viewers of the latter set visually recognize the lack of global equity and that the countries' wealth has been built on harmful emissions. I recommend that CC risks be communicated by bivariate gridded cartograms showing the hazard in color and population, or a combination of population and a vulnerability indicator, by distortion of grid cells. Gridded cartograms are also appropriate for visualizing the availability of natural resources to humans. For communicating complex information, sets of cartograms should be carefully designed instead of presenting single cartograms. Inclusion of a conventional map enhances the viewers' capability to take up the information represented by distortion. Empirical studies about the capability of global cartograms to convey complex information and to trigger moral emotions should be conducted, with a special focus on risk communication.
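As a rough illustration of the bivariate idea described above (hazard in color, exposure by cell size), the following Python sketch draws a non-contiguous value-by-area grid. The contiguous density-equalizing step used for true gridded cartograms is omitted, and the grid, population, and hazard values are synthetic.

```python
# Minimal sketch (not the paper's method): a non-contiguous value-by-area grid,
# where each cell's drawn area is proportional to population and its color
# encodes a hazard indicator. Contiguous (Gastner-Newman-style) cartograms
# require a density-equalizing step that is omitted here.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

rng = np.random.default_rng(0)
ny, nx = 10, 20                                  # toy 0.5-degree-style grid
population = rng.lognormal(0, 1, (ny, nx))       # exposure proxy
hazard = rng.random((ny, nx))                    # hazard indicator in [0, 1]

scale = np.sqrt(population / population.max())   # side length ~ sqrt(area)
cmap = plt.cm.YlOrRd
fig, ax = plt.subplots(figsize=(10, 5))
for i in range(ny):
    for j in range(nx):
        s = scale[i, j]
        # center the shrunken cell inside its grid slot
        ax.add_patch(Rectangle((j + (1 - s) / 2, i + (1 - s) / 2), s, s,
                               facecolor=cmap(hazard[i, j]),
                               edgecolor="grey", linewidth=0.3))
ax.set_xlim(0, nx); ax.set_ylim(0, ny); ax.set_aspect("equal")
ax.set_title("Bivariate grid: area ~ population, color ~ hazard")
plt.show()
```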
NREL: Renewable Resource Data Center - Solar Resource Models and Tools
The Renewable Resource Data Center (RReDC) features hourly average measured global horizontal data; the NSRDB Data Viewer, for visualizing, exploring, and downloading solar resource data from the National Solar Radiation Database; and the PVWatts® Calculator.
A Visual Test for Visual "Literacy."
ERIC Educational Resources Information Center
Messaris, Paul
Four different principles of visual manipulation constitute a minimal list of what a visually "literate" viewer should know about, but certain problems exist which are inherent in measuring viewers' awareness of each of them. The four principles are: (1) paraproxemics, or camera work which derives its effectiveness from an analogy to the…
NGL Viewer: a web application for molecular visualization
Rose, Alexander S.; Hildebrand, Peter W.
2015-01-01
The NGL Viewer (http://proteinformatics.charite.de/ngl) is a web application for the visualization of macromolecular structures. By fully adopting capabilities of modern web browsers, such as WebGL, for molecular graphics, the viewer can interactively display large molecular complexes and is also unaffected by the retirement of third-party plug-ins like Flash and Java Applets. Generally, the web application offers comprehensive molecular visualization through a graphical user interface so that life scientists can easily access and profit from available structural data. It supports common structural file-formats (e.g. PDB, mmCIF) and a variety of molecular representations (e.g. ‘cartoon, spacefill, licorice’). Moreover, the viewer can be embedded in other web sites to provide specialized visualizations of entries in structural databases or results of structure-related calculations. PMID:25925569
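To illustrate the embedding claim, here is a minimal, hedged sketch that writes a standalone HTML page using NGL's documented Stage/loadFile calls; the CDN path and the PDB entry 1CRN are assumptions for the example.

```python
# Minimal sketch: write a standalone HTML page that embeds the NGL Viewer.
# Assumptions: the unpkg CDN path for the "ngl" package and the PDB entry
# "1CRN" are illustrative; NGL.Stage / stage.loadFile follow NGL's documented
# embedding example.
from pathlib import Path

html = """<!DOCTYPE html>
<html>
  <head>
    <script src="https://unpkg.com/ngl"></script>  <!-- assumed CDN path -->
  </head>
  <body>
    <div id="viewport" style="width:600px; height:400px;"></div>
    <script>
      var stage = new NGL.Stage("viewport");
      // load a structure straight from the RCSB PDB and show a default representation
      stage.loadFile("rcsb://1crn", { defaultRepresentation: true });
    </script>
  </body>
</html>
"""
Path("ngl_embed.html").write_text(html)
print("Wrote ngl_embed.html; open it in a WebGL-capable browser.")
```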
SAKURA-viewer: intelligent order history viewer based on two-viewpoint architecture.
Toyoda, Shuichi; Niki, Noboru; Nishitani, Hiromu
2007-03-01
We propose a new intelligent order history viewer applied to consolidating and visualizing data. SAKURA-viewer is a highly effective tool, as: 1) it visualizes both the semantic viewpoint and the temporal viewpoint of patient records simultaneously; 2) it promotes awareness of contextual information among the daily data; and 3) it implements patient-centric data entry methods. This viewer helps decrease the user's workload in an order entry system. The viewer is now incorporated into an order entry system being run on an experimental basis. We describe the evaluation of this system using results of a user satisfaction survey, analysis of information consolidation within the database, and analysis of the frequency of use of data entry methods.
ERIC Educational Resources Information Center
Stewig, John Warren
Visual literacy--seeing with insight--enables child viewers of pictures to examine elements such as color, line, shape, form, depth, and detail to see what relations exist both among these components and between what is in the picture and their previous visual experience. The viewer can extract meaning and respond to it, either by talking or…
Sherman, Aleksandra; Grabowecky, Marcia; Suzuki, Satoru
2015-08-01
What shapes art appreciation? Much research has focused on the importance of visual features themselves (e.g., symmetry, natural scene statistics) and of the viewer's experience and expertise with specific artworks. However, even after taking these factors into account, there are considerable individual differences in art preferences. Our new result suggests that art preference is also influenced by the compatibility between visual properties and the characteristics of the viewer's visual system. Specifically, we have demonstrated, using 120 artworks from diverse periods, cultures, genres, and styles, that art appreciation is increased when the level of visual complexity within an artwork is compatible with the viewer's visual working memory capacity. The result highlights the importance of the interaction between visual features and the beholder's general visual capacity in shaping art appreciation.
Building Stories about Sea Level Rise through Interactive Visualizations
NASA Astrophysics Data System (ADS)
Stephens, S. H.; DeLorme, D. E.; Hagen, S. C.
2013-12-01
Digital media provide storytellers with dynamic new tools for communicating about scientific issues via interactive narrative visualizations. While traditional storytelling uses plot, characterization, and point of view to engage audiences with underlying themes and messages, interactive visualizations can be described as 'narrative builders' that promote insight through the process of discovery (Dove, G. & Jones, S. 2012, Proc. IHCI 2012). Narrative visualizations are used in online journalism to tell complex stories that allow readers to select aspects of datasets to explore and construct alternative interpretations of information (Segel, E. & Heer, J. 2010, IEEE Trans. Vis. Comp. Graph.16, 1139), thus enabling them to participate in the story-building process. Nevertheless, narrative visualizations also incorporate author-selected narrative elements that help guide and constrain the overall themes and messaging of the visualization (Hullman, J. & Diakopoulos, N. 2011, IEEE Trans. Vis. Comp. Graph. 17, 2231). One specific type of interactive narrative visualization that is used for science communication is the sea level rise (SLR) viewer. SLR viewers generally consist of a base map, upon which projections of sea level rise scenarios can be layered, and various controls for changing the viewpoint and scenario parameters. They are used to communicate the results of scientific modeling and help readers visualize the potential impacts of SLR on the coastal zone. Readers can use SLR viewers to construct personal narratives of the effects of SLR under different scenarios in locations that are important to them, thus extending the potential reach and impact of scientific research. With careful selection of narrative elements that guide reader interpretation, the communicative aspects of these visualizations may be made more effective. This presentation reports the results of a content analysis of a subset of existing SLR viewers selected in order to comprehensively identify and characterize the narrative elements that contribute to this storytelling medium. The results describe four layers of narrative elements in these viewers: data, visual representations, annotations, and interactivity; and explain the ways in which these elements are used to communicate about SLR. Most existing SLR viewers have been designed with attention to technical usability; however, careful design of narrative elements could increase their overall effectiveness as story-building tools. The analysis concludes with recommendations for narrative elements that should be considered when designing new SLR viewers, and offers suggestions for integrating these components to balance author-driven and reader-driven design features for more effective messaging.
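As a toy illustration of the scenario layers such viewers drape over a base map, the sketch below applies a simple "bathtub" threshold to a synthetic elevation grid; operational SLR viewers typically rely on more sophisticated, hydrologically connected or dynamic models.

```python
# Minimal "bathtub" sketch of the kind of scenario layer an SLR viewer drapes
# over a base map: cells whose elevation falls at or below a chosen sea-level-
# rise scenario are flagged as inundated. The toy elevation grid and scenario
# values are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
elevation_m = np.cumsum(rng.normal(0.05, 0.2, (80, 80)), axis=1)  # toy coastal DEM

def inundation_layer(dem_m, slr_m):
    """Boolean mask of cells at or below the given sea-level-rise scenario."""
    return dem_m <= slr_m

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, slr in zip(axes, [0.5, 1.0, 2.0]):        # scenario parameters (metres)
    ax.imshow(elevation_m, cmap="terrain")
    ax.imshow(np.where(inundation_layer(elevation_m, slr), 1.0, np.nan),
              cmap="Blues", alpha=0.6)            # scenario overlay
    ax.set_title(f"SLR scenario: {slr} m")
    ax.axis("off")
plt.tight_layout()
plt.show()
```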
Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric
2011-01-01
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
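A hedged sketch of the scientific-Python integration mentioned above: a synthetic connectivity matrix is loaded into NetworkX for standard network summaries. This is illustrative only and does not use the Connectome File Format API.

```python
# Minimal sketch of the "scientific Python" integration the toolkit builds on:
# a connectivity matrix (e.g., streamline counts between cortical regions) is
# turned into a NetworkX graph for standard network analysis. The matrix and
# region names are synthetic; this is not the Connectome File Format library.
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n_regions = 8
weights = rng.integers(0, 50, size=(n_regions, n_regions))
weights = np.triu(weights, k=1)                 # keep an undirected upper triangle
conn = weights + weights.T                      # symmetric connectivity matrix

G = nx.from_numpy_array(conn)                   # weighted, undirected graph
labels = {i: f"region_{i}" for i in G.nodes}    # hypothetical region labels
nx.set_node_attributes(G, labels, "name")

# typical connectome summaries: node strength and weighted clustering
strength = dict(G.degree(weight="weight"))
clustering = nx.clustering(G, weight="weight")
for node in G.nodes:
    print(f"{labels[node]}: strength={strength[node]}, "
          f"clustering={clustering[node]:.3f}")
```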
Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Visualization for genomics: the Microbial Genome Viewer.
Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J
2004-07-22
A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
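The following sketch conveys the chromosome-wheel idea in miniature: synthetic genes are rendered as arcs on a circle and written out as SVG, the same scalable format the viewer emits. It is a conceptual illustration, not the Microbial Genome Viewer's own code.

```python
# Minimal sketch of a "chromosome wheel": genes are drawn as arcs on a circle
# and written out as SVG. Genome length, gene coordinates and colors below are
# synthetic placeholders.
import math

GENOME_LENGTH = 2_000_000
genes = [("dnaA", 1, 1500, "#1f77b4"),           # (name, start, end, color)
         ("gyrB", 400_000, 402_500, "#ff7f0e"),
         ("rpoB", 1_200_000, 1_204_000, "#2ca02c")]

def polar(cx, cy, r, pos):
    """Map a genome position to a point on a circle of radius r."""
    angle = 2 * math.pi * pos / GENOME_LENGTH - math.pi / 2
    return cx + r * math.cos(angle), cy + r * math.sin(angle)

cx = cy = 250
parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="500" height="500">',
         f'<circle cx="{cx}" cy="{cy}" r="200" fill="none" stroke="black"/>']
for name, start, end, color in genes:
    x1, y1 = polar(cx, cy, 200, start)
    x2, y2 = polar(cx, cy, 200, end)
    parts.append(f'<path d="M {x1:.1f} {y1:.1f} A 200 200 0 0 1 {x2:.1f} {y2:.1f}" '
                 f'stroke="{color}" stroke-width="12" fill="none">'
                 f'<title>{name}</title></path>')
parts.append("</svg>")

with open("chromosome_wheel.svg", "w") as fh:
    fh.write("\n".join(parts))
print("Wrote chromosome_wheel.svg")
```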
Houska, Treva R.; Johnson, A.P.
2012-01-01
The Global Visualization Viewer (GloVis) trifold provides basic information for online access to a subset of satellite and aerial photography collections from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The GloVis (http://glovis.usgs.gov/) browser-based utility allows users to search and download National Aerial Photography Program (NAPP), National High Altitude Photography (NHAP), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Earth Observing-1 (EO-1), Global Land Survey, Moderate Resolution Imaging Spectroradiometer (MODIS), and TerraLook data. Minimum computer system requirements and customer service contact information also are included in the brochure.
Teaching an Old Client New Tricks - the GloVIS Global Visualization Viewer after 14 Years
NASA Astrophysics Data System (ADS)
Meyer, D. J.; Steinwand, D.; Lemig, K.; Davis, B.; Werpy, J.; Quenzer, R.
2014-12-01
The US Geological Survey's Global Visualization Viewer (GloVis) is a web-based, visual search and discovery tool used to access imagery from aircraft and space-based imaging systems. GloVis was introduced shortly after the launch of Landsat 7 to provide a visual client for selecting images acquired by the Enhanced Thematic Mapper Plus. Since then, it has been expanded to search other Landsat imagery (Multispectral Scanner, Thematic Mapper, Operational Land Imager), imagery from a variety of NASA instruments (Moderate Resolution Imaging Spectroradiometer, Advanced Spaceborne Thermal Emission and Reflection Radiometer, Advanced Land Imager, Hyperion), along with images from high-resolution airborne photography and special collections representing decades-long observations. GloVis incorporated a number of features considered novel at its original release, such as rapid visual browse and the ability to use one type of satellite observation (e.g., vegetation seasonality curves derived from the Advanced Very High Resolution Radiometer) to assist in the selection of another (e.g., Landsat). After 14 years, the GloVis client has gained a large following, having served millions of images to hundreds of thousands of users, but is due for a major re-design. Described here are the guiding principles driving the re-design, the methodology used to understand how users discover and retrieve imagery, and candidate technologies to be leveraged in the re-design. The guiding principles include (1) visual co-discovery, the ability to browse and select imagery from diverse sources simultaneously; (2) user-centric design, understanding user needs prior to design and involving users throughout the design process; (3) adaptability, the use of flexible design to permit rapid incorporation of new capabilities; and (4) interoperability, the use of services, conventions, and protocols to permit interaction with external sources of Earth science imagery.
HTML5 PivotViewer: high-throughput visualization and querying of image data on the web.
Taylor, Stephen; Noble, Roger
2014-09-15
Visualization and analysis of large numbers of biological images have generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open-source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. © The Author 2014. Published by Oxford University Press.
ERIC Educational Resources Information Center
Klein, James D.; And Others
1987-01-01
Reviews study that investigated the interaction between the age of the viewer and the gender of the narrator of a film. Visual attention to the program by second and fifth graders is described, and recall of story ideas as measured by a multiple-choice test is analyzed. (21 references) (LRW)
Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition
NASA Astrophysics Data System (ADS)
Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.
2017-05-01
In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists, and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).
Foggy perception slows us down.
Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H
2012-10-30
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog, that is, fog characterized by better visibility for distant than for close objects, we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001
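A small sketch of the stimulus logic at issue, assuming a Koschmieder-style exponential attenuation model and illustrative parameter values: uniform contrast reduction versus the distance-dependent loss of real fog and its reversed "anti-fog" profile.

```python
# Sketch of the stimulus manipulation described above: uniform contrast
# reduction versus distance-dependent attenuation as in real fog
# (Koschmieder-style C(d) = C0 * exp(-beta * d)). The extinction coefficient
# and distances are illustrative, not the values used in the study.
import numpy as np
import matplotlib.pyplot as plt

distance_m = np.linspace(1, 200, 200)
c0 = 1.0                       # intrinsic object contrast
beta = 0.02                    # assumed extinction coefficient (1/m)

uniform_fog = np.full_like(distance_m, 0.4 * c0)                  # same cut at all distances
real_fog = c0 * np.exp(-beta * distance_m)                        # stronger loss for far objects
anti_fog = c0 * np.exp(-beta * (distance_m.max() - distance_m))   # reversed profile

plt.plot(distance_m, uniform_fog, label="uniform reduction")
plt.plot(distance_m, real_fog, label="real fog (distance-dependent)")
plt.plot(distance_m, anti_fog, label='"anti-fog" (reversed)')
plt.xlabel("distance from viewer (m)")
plt.ylabel("apparent contrast")
plt.legend()
plt.show()
```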
Remote Viewer for Maritime Robotics Software
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Wolf, Michael; Huntsberger, Terrance L.; Howard, Andrew B.
2013-01-01
This software is a viewer program for maritime robotics software that provides a 3D visualization of the boat pose, its position history, ENC (Electronic Navigational Chart) information, camera images, map overlay, and detected tracks.
Reactome diagram viewer: data structures and strategies to boost performance.
Fabregat, Antonio; Sidiropoulos, Konstantinos; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-04-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. For web-based pathway visualization, Reactome uses a custom pathway diagram viewer that has evolved over the past years. Here, we present comprehensive enhancements in usability and performance based on extensive usability testing sessions and technology developments, aiming to optimize the viewer towards the needs of the community. The pathway diagram viewer version 3 achieves consistently better performance, loading and rendering 97% of the diagrams in Reactome in less than 1 s. Combining the multi-layer HTML5 canvas strategy with a space-partitioning data structure minimizes CPU workload, enabling the introduction of new features that further enhance user experience. Through the use of highly optimized data structures and algorithms, Reactome has boosted the performance and usability of the new pathway diagram viewer, providing a robust, scalable and easy-to-integrate solution to pathway visualization. As graph-based visualization of complex data is a frequent challenge in bioinformatics, many of the individual strategies presented here are applicable to a wide range of web-based bioinformatics resources. Reactome is available online at: https://reactome.org. The diagram viewer is part of the Reactome pathway browser (https://reactome.org/PathwayBrowser/) and also available as a stand-alone widget at: https://reactome.org/dev/diagram/. The source code is freely available at: https://github.com/reactome-pwp/diagram. fabregat@ebi.ac.uk or hhe@ebi.ac.uk. Supplementary data are available at Bioinformatics online.
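To illustrate the space-partitioning idea in isolation, here is a generic point quadtree for hit-testing diagram nodes; it is not Reactome's implementation, and the coordinates and payloads are synthetic.

```python
# Generic sketch of the space-partitioning idea: a small quadtree over diagram
# node positions so that hit-testing a pointer position touches only nearby
# items instead of every glyph. This is an illustration, not Reactome's code.
class QuadTree:
    def __init__(self, x, y, w, h, capacity=4):
        self.bounds = (x, y, w, h)          # axis-aligned region
        self.capacity = capacity
        self.items = []                     # (px, py, payload)
        self.children = None

    def _contains(self, px, py):
        x, y, w, h = self.bounds
        return x <= px < x + w and y <= py < y + h

    def insert(self, px, py, payload):
        if not self._contains(px, py):
            return False
        if self.children is None and len(self.items) < self.capacity:
            self.items.append((px, py, payload))
            return True
        if self.children is None:           # split into four quadrants
            x, y, w, h = self.bounds
            hw, hh = w / 2, h / 2
            self.children = [QuadTree(x, y, hw, hh), QuadTree(x + hw, y, hw, hh),
                             QuadTree(x, y + hh, hw, hh), QuadTree(x + hw, y + hh, hw, hh)]
            for item in self.items:
                self._insert_child(*item)
            self.items = []
        return self._insert_child(px, py, payload)

    def _insert_child(self, px, py, payload):
        return any(c.insert(px, py, payload) for c in self.children)

    def query(self, px, py, radius):
        """Return payloads within `radius` of (px, py)."""
        x, y, w, h = self.bounds
        if px + radius < x or px - radius > x + w or py + radius < y or py - radius > y + h:
            return []
        hits = [p for (ix, iy, p) in self.items
                if (ix - px) ** 2 + (iy - py) ** 2 <= radius ** 2]
        if self.children:
            for c in self.children:
                hits.extend(c.query(px, py, radius))
        return hits

tree = QuadTree(0, 0, 1000, 1000)
for i in range(200):
    tree.insert((i * 37) % 1000, (i * 91) % 1000, f"node_{i}")
print(tree.query(500, 500, 30))             # only nearby diagram nodes are checked
```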
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
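One simple way to compare a saliency map against eye-tracking data, sketched below with synthetic values, is the Normalized Scanpath Saliency (NSS) score; this illustrates the evaluation step only and is not the DVS model itself.

```python
# Sketch of one common way to score a saliency map against eye-tracking data:
# Normalized Scanpath Saliency (NSS), i.e., the mean z-scored saliency value
# at the recorded fixation locations. The map and fixations are synthetic.
import numpy as np

def nss(saliency_map, fixations):
    """fixations: iterable of (row, col) fixation coordinates."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[r, c] for r, c in fixations]))

rng = np.random.default_rng(7)
saliency = rng.random((240, 320))
saliency[100:140, 150:200] += 2.0            # a region the model calls salient
fixations = [(rng.integers(100, 140), rng.integers(150, 200)) for _ in range(20)]

print(f"NSS = {nss(saliency, fixations):.2f}")   # > 0: fixations fall on salient regions
```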
The National Map: New Viewer, Services, and Data Download
Dollison, Robert M.
2010-01-01
Managed by the U.S. Geological Survey's (USGS) National Geospatial Program, The National Map has transitioned data assets and viewer applications to a new visualization and product and service delivery environment, which includes an improved viewing platform, base map data and overlay services, and an integrated data download service. This new viewing solution expands upon the National Geospatial Intelligence Agency (NGA) Palanterra X3 viewer, providing a solid technology foundation for navigation and basic Web mapping functionality. Building upon the NGA viewer allows The National Map to focus on improving data services, functions, and data download capabilities. Initially released to the public at the 125th anniversary of mapping in the USGS on December 3, 2009, the viewer and services are now the primary distribution point for The National Map data. The National Map Viewer: http://viewer.nationalmap.gov
Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.
Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P
2013-03-01
Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.
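A hedged sketch of non-interactive loading: IGV's documented batch-script commands (new, genome, load, goto, snapshot) can be generated programmatically; the file paths, genome build, and locus below are placeholders.

```python
# Sketch: drive IGV's flexible data loading non-interactively with a batch
# script. The commands (new/genome/load/goto/snapshot/exit) follow IGV's
# batch-script documentation; the file paths and locus are placeholders.
from pathlib import Path

tracks = ["sample1.bam", "peaks.bed"]            # local files or remote URLs
locus = "chr17:7,571,720-7,590,868"              # e.g., a region of interest

script = ["new", "genome hg38"]
script += [f"load {t}" for t in tracks]
script += [f"goto {locus}", "snapshot region.png", "exit"]

Path("igv_batch.txt").write_text("\n".join(script) + "\n")
# then run, e.g.:  igv.sh -b igv_batch.txt   (batch flag per IGV docs; IGV assumed installed)
print(Path("igv_batch.txt").read_text())
```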
SILVA tree viewer: interactive web browsing of the SILVA phylogenetic guide trees.
Beccati, Alan; Gerken, Jan; Quast, Christian; Yilmaz, Pelin; Glöckner, Frank Oliver
2017-09-30
Phylogenetic trees are an important tool to study the evolutionary relationships among organisms. The huge number of available taxa poses difficulties for their interactive visualization, which hampers the interaction with users that provides feedback for further improvement of the taxonomic framework. The SILVA Tree Viewer is a web application designed for visualizing large phylogenetic trees without requiring the download of any software tool or data files. The SILVA Tree Viewer is based on Web Geographic Information Systems (Web-GIS) technology with a PostgreSQL backend. It enables zoom and pan functionalities similar to Google Maps. The SILVA Tree Viewer enables access to two phylogenetic (guide) trees provided by the SILVA database: the SSU Ref NR99, inferred from high-quality, full-length small subunit sequences clustered at 99% sequence identity, and the LSU Ref, inferred from high-quality, full-length large subunit sequences. The Tree Viewer provides tree navigation, search and browse tools, as well as an interactive feedback system to collect all kinds of requests, ranging from taxonomy to data curation and improving the tool itself.
Beaver, John E; Bourne, Philip E; Ponomarenko, Julia V
2007-02-21
Structural information about epitopes, particularly the three-dimensional (3D) structures of antigens in complex with immune receptors, presents a valuable source of data for immunology. This information is available in the Protein Data Bank (PDB) and provided in curated form by the Immune Epitope Database and Analysis Resource (IEDB). With continued growth in these data and the importance of understanding molecular-level interactions of immunological interest, there is a need for new specialized molecular visualization and analysis tools. The EpitopeViewer is a platform-independent Java application for the visualization of the three-dimensional structure and sequence of epitopes and analyses of their interactions with antigen-specific receptors of the immune system (antibodies, T cell receptors and MHC molecules). The viewer renders both 3D views and two-dimensional plots of intermolecular interactions between the antigen and receptor(s) by reading curated data from the IEDB and/or calculated on-the-fly from atom coordinates from the PDB. The 3D views and associated interactions can be saved for future use and publication. The EpitopeViewer can be accessed from the IEDB Web site http://www.immuneepitope.org through the quick link 'Browse Records by 3D Structure.' The EpitopeViewer has been designed and tested for use by immunologists with little or no training in molecular graphics, and it can be launched from most popular Web browsers without user intervention. A Java Runtime Environment (JRE) 1.4.2 or higher is required.
Integrative Genomics Viewer (IGV) | Informatics Technology for Cancer Research (ITCR)
The Integrative Genomics Viewer (IGV) is a high-performance visualization tool for interactive exploration of large, integrated genomic datasets. It supports a wide variety of data types, including array-based and next-generation sequence data, and genomic annotations.
Modeling human comprehension of data visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie
This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
Usage of stereoscopic visualization in the learning contents of rotational motion.
Matsuura, Shu
2013-01-01
Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers while maintaining the overall effect of instantaneous spatial recognition.
Visions of our Planet's Atmosphere, Land and Oceans: Spectacular Visualizations of our Blue Marble
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Starr, David (Technical Monitor)
2002-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to South Africa, Cape Town and Johannesburg using NASA Terra/MODIS data, Landsat data and 1 m IKONOS 'Spy Satellite' data. Zoom in to any place in South Africa using Earth Viewer 3D from Keyhole Inc. and Landsat data at 30 m resolution. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes and 'tornadoes'. See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, including 1-min GOES rapid scan image sequences of the Nov 9th 2001 Midwest tornadic thunderstorms, and have them explained.
Lavaur, Jean-Marc; Bairstow, Dominique
2011-12-01
This research aimed to study the role of subtitling in film comprehension. It focused on the languages in which the subtitles are written and on the participants' fluency levels in the languages presented in the film. In a preliminary part of the study, the most salient visual and dialogue elements of a short sequence of an English film were extracted by means of a free recall task after showing two versions of the film (first a silent, then a dubbed-into-French version) to native French speakers. This visual and dialogue information was used to construct a questionnaire on the understanding of the film presented in the main part of the study, in which other French native speakers with beginner, intermediate, or advanced fluency levels in English were shown one of three versions of the film used in the preliminary part. These versions had, respectively, no subtitles, English subtitles, or French subtitles. The results indicate a global interaction between all three factors in this study: for the beginners, visual processing dropped from the version without subtitles to that with English subtitles, and even more so when French subtitles were provided, whereas the effect of film version on dialogue comprehension was the reverse. The advanced participants achieved higher comprehension of both types of information with the version without subtitles, and dialogue information processing was always better than visual information processing. The intermediate group similarly processed dialogues better than visual information, but was not affected by film version. These results imply that, depending on the viewers' fluency levels, the language of subtitles can have different effects on movie information processing.
Climate Science in Social Media: What's Worked, and What Hasn't
NASA Astrophysics Data System (ADS)
Sinclair, P.
2015-12-01
A common conception of social media is that the definition of success is a huge number of viewers and followers. While these outcomes are not undesirable, they are not the only signs of success. More important than the size of the audience is how well that audience follows and, in turn, propagates the desired message. The Dark Snow Project has been successful in driving a global conversation about the Greenland ice sheet, not by creating huge numbers of viewers and followers, but due to a significant and highly motivated following among media gatekeepers, academic messengers, and social media activists. It was very important that, from the start, the Dark Snow story (that changes in ice sheet albedo may be driving increased melt) was effectively encoded, or "branded", in the project's name, "Dark Snow", a vivid and easily illustrated visual image. A simple concept that is easy to describe and understand, but profound in implication, has allowed for wide discussion among professionals in science and media, as well as the general public.
JS-MS: a cross-platform, modular javascript viewer for mass spectrometry signals.
Rosen, Jebediah; Handy, Kyle; Gillan, André; Smith, Rob
2017-11-06
Despite the ubiquity of mass spectrometry (MS), data processing tools can be surprisingly limited. To date, there is no stand-alone, cross-platform 3-D visualizer for MS data. Available visualization toolkits require large libraries with multiple dependencies and are not well suited for custom MS data processing modules, such as MS storage systems or data processing algorithms. We present JS-MS, a 3-D, modular JavaScript client application for viewing MS data. JS-MS provides several advantages over existing MS viewers, such as a dependency-free, browser-based, one-click, cross-platform install and better navigation interfaces. The client includes a modular Java backend with a novel streaming .mzML parser to demonstrate the API-based serving of MS data to the viewer. JS-MS enables custom MS data processing and evaluation by providing fast, 3-D visualization using improved navigation without dependencies. JS-MS is publicly available with a GPLv2 license at github.com/optimusmoose/jsms.
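The streaming idea behind the backend parser can be sketched generically in Python: an event-based pass over an .mzML file that frees each spectrum element after reading it. This simplified version skips binary peak decoding and uses a placeholder file path.

```python
# Generic sketch of a streaming pass over an .mzML file: walk the XML with an
# event-based parser, harvest a couple of per-spectrum attributes, and clear
# each element after reading it so full peak data never accumulates in memory.
# (Simplified: binary peak arrays are not decoded here.)
import xml.etree.ElementTree as ET

NS = "{http://psi.hupo.org/ms/mzml}"             # mzML XML namespace

def stream_spectra(path):
    """Yield (spectrum id, ms level) pairs without loading the whole file."""
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == NS + "spectrum":
            ms_level = None
            for cv in elem.iter(NS + "cvParam"):
                if cv.get("name") == "ms level":
                    ms_level = cv.get("value")
            yield elem.get("id"), ms_level
            elem.clear()                         # release the parsed subtree

if __name__ == "__main__":
    for spec_id, level in stream_spectra("example.mzML"):   # placeholder path
        print(spec_id, "MS level", level)
```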
Watching film for the first time: how adult viewers interpret perceptual discontinuities in film.
Schwan, Stephan; Ildirar, Sermin
2010-07-01
Although film, television, and video play an important role in modern societies, the extent to which the similarities of cinematographic images to natural, unmediated conditions of visual experience contribute to viewers' comprehension is largely an open question. To address this question, we compared 20 inexperienced adult viewers from southern Turkey with groups of medium- and high-experienced adult viewers from the same region. In individual sessions, each participant was shown a set of 14 film clips that included a number of perceptual discontinuities typical for film. The viewers' interpretations were recorded and analyzed. The findings show that it is not the similarity to conditions of natural perception but the presence of a familiar line of action that determines the comprehensibility of films for inexperienced viewers. In the absence of such a line of action, extended prior experience is required for appropriate interpretation of cinematographic images such as those we investigated in this study.
ERIC Educational Resources Information Center
Messaris, Paul; Nielsen, Karen O.
A study examined the influence of viewers' backgrounds on their interpretation of "associational montage" in television advertising (editing which seeks to imply an analogy between the product and a juxtaposed image possessing desirable qualities). Subjects, 32 television professionals from two urban television stations and 95 customers…
Speaking "Out of Place": YouTube Documentaries and Viewers' Comment Culture as Political Education
ERIC Educational Resources Information Center
Piotrowski, Marcelina
2015-01-01
This article examines the comment culture that accompanies documentary films on YouTube as a site of (geo) political education. It considers how viewers try to teach each other about the proper "place" of critique in response to the global, national, and local rhetoric featured in one environmental documentary film. YouTube viewers use…
Assessing natural hazard risk using images and data
NASA Astrophysics Data System (ADS)
Mccullough, H. L.; Dunbar, P. K.; Varner, J. D.; Mungov, G.
2012-12-01
Photographs and other visual media provide valuable pre- and post-event data for natural hazard assessment. Scientific research, mitigation, and forecasting rely on visual data for risk analysis, inundation mapping and historic records. Instrumental data only reveal a portion of the whole story; photographs explicitly illustrate the physical and societal impacts of the event. Visual data are rapidly increasing as portable high-resolution cameras and video recorders become more widely available. Incorporating these data into archives ensures a more complete historical account of events. Integrating natural hazards data, such as tsunami, earthquake and volcanic eruption events, socio-economic information, and tsunami deposits and runups, along with images and photographs, enhances event comprehension. Global historic databases at NOAA's National Geophysical Data Center (NGDC) consolidate these data, providing the user with easy access to a network of information. NGDC's Natural Hazards Image Database (ngdc.noaa.gov/hazardimages) was recently improved to provide a more efficient and dynamic user interface. It uses the Google Maps API and Keyhole Markup Language (KML) to provide geographic context to the images and events. Descriptive tags, or keywords, have been applied to each image, enabling easier navigation and discovery. In addition, the Natural Hazards Map Viewer (maps.ngdc.noaa.gov/viewers/hazards) provides the ability to search and browse data layers on a Mercator-projection globe with a variety of map backgrounds. This combination of features creates a simple and effective way to enhance our understanding of hazard events and risks using imagery.
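A minimal sketch of the KML side of such an archive: each geolocated photograph becomes a Placemark that a map viewer can render. The captions and coordinates below are invented for illustration.

```python
# Sketch of the KML side of such an image database: each geolocated hazard
# photograph becomes a <Placemark> with a caption, so the set can be dropped
# onto a web map. Event names and coordinates below are made up.
import xml.etree.ElementTree as ET

photos = [("Tsunami runup photo", 142.37, 38.32),
          ("Post-earthquake survey", -72.53, 18.46)]       # (caption, lon, lat)

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
for caption, lon, lat in photos:
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = caption
    ET.SubElement(pm, "description").text = "Photograph from the hazards image archive"
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"

ET.ElementTree(kml).write("hazard_images.kml", xml_declaration=True, encoding="utf-8")
print("Wrote hazard_images.kml")
```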
A study of perceptual analysis in a high-level autistic subject with exceptional graphic abilities.
Mottron, L; Belleville, S
1993-11-01
We report here the case study of a patient (E.C.) with an Asperger syndrome, or autism with quasinormal intelligence, who shows an outstanding ability for three-dimensional drawing of inanimate objects (savant syndrome). An assessment of the subsystems proposed in recent models of object recognition evidenced intact perceptual analysis and identification. The initial (or primal sketch), viewer-centered (or 2-1/2-D), or object-centered (3-D) representations and the recognition and name levels were functional. In contrast, E.C.'s pattern of performance in three different types of tasks converge to suggest an anomaly in the hierarchical organization of the local and global parts of a figure: a local interference effect in incongruent hierarchical visual stimuli, a deficit in relating local parts to global form information in impossible figures, and an absence of feature-grouping in graphic recall. The results are discussed in relation to normal visual perception and to current accounts of the savant syndrome in autism.
Baker, Amanda; Blanchard, Céline
2017-09-01
Research has primarily focused on the consequences of the female thin ideal on women and has largely ignored the effects on men. Two studies were designed to investigate the effects of a female thin ideal video on cognitive (Study 1: appearance schema, Study 2: visual-spatial processing) and self-evaluative measures in male viewers. Results revealed that the female thin ideal predicted men's increased appearance schema activation and poorer cognitive performance on a visual-spatial task. Constructs from self-determination theory (i.e., global autonomous and controlled motivation) were included to help explain for whom the video effects might be strongest or weakest. Findings demonstrated that a global autonomous motivation orientation played a protective role against the effects of the female thin ideal. Given that autonomous motivation was a significant moderator, SDT is an area worth exploring further to determine whether motivational strategies can benefit men who are susceptible to media body ideals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bass, Lee M; Misiewicz, Lawrence
2012-11-01
Wireless capsule endoscopy (WCE) is an increasingly used procedure for visualization of the small intestine. One challenge in pediatric WCE is the placement of the capsule in a population unable to swallow it for a variety of reasons. Here we present a novel use of the real-time (RT) viewer in the endoscopic deployment of the capsule endoscope. We performed a retrospective chart review on all WCE completed at the Children's Memorial Hospital from February 2010 to May 2011. Following a diagnostic upper endoscopy, the RT viewer was attached to the capsule recorder and image was noted before insertion. The endoscope and AdvanCE capsule delivery device were slowly advanced into duodenum while maintaining visualization on the RT viewer. A total of 17 patients who underwent a WCE with endoscopic placement were identified. They ranged in ages from 2 to 19 years. Thirteen patients required endoscopic placement because of the inability to swallow the capsule, whereas 4 were placed during a scheduled procedure to take advantage of sedation and airway protection. All of the 17 patients had successful deployment of the capsule into the duodenal lumen. In each case, the endoscopist was able to confirm capsule location in duodenum during scope withdrawal. There was no evidence of iatrogenic trauma or bleeding in any patient. There were 5 incomplete studies, a completion rate consistent with that described in the literature. The use of the RT viewer for endoscopic deployment of WCE is an effective technique to improve visualization of capsule placement in the pediatric population.
3D Visualization for Planetary Missions
NASA Astrophysics Data System (ADS)
DeWolfe, A. W.; Larsen, K.; Brain, D.
2018-04-01
We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.
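One common way to feed spacecraft positions to a Cesium-based viewer is a CZML document; the sketch below emits a minimal one with made-up orbit samples and a hypothetical entity id.

```python
# Sketch of feeding a Cesium-based viewer: emit a small CZML document that
# places a spacecraft at a few time-tagged positions. The orbit numbers are
# made up; a real pipeline would sample ephemerides or telemetry instead.
import json

epoch = "2018-04-01T00:00:00Z"
# [seconds-from-epoch, longitude_deg, latitude_deg, height_m] repeated
samples = [0,    0.0,  0.0, 400_000,
           600, 25.0, 10.0, 400_000,
           1200, 50.0, 20.0, 400_000]

czml = [
    {"id": "document", "version": "1.0"},        # required header packet
    {
        "id": "spacecraft-1",                    # hypothetical entity id
        "availability": "2018-04-01T00:00:00Z/2018-04-01T00:20:00Z",
        "position": {"epoch": epoch, "cartographicDegrees": samples},
        "point": {"pixelSize": 8},
        "path": {"leadTime": 0, "trailTime": 1200},
    },
]

with open("orbit.czml", "w") as fh:
    json.dump(czml, fh, indent=2)
print("Wrote orbit.czml; load it in Cesium with CzmlDataSource.load('orbit.czml').")
```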
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, our estimate of VAD. We demonstrate that the computed steady-state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
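A worked sketch of the two computations described above, using an illustrative two-state transition matrix: the steady-state saccade probability (the VAD estimate) and a Kullback-Leibler divergence between consecutive saliency maps.

```python
# Sketch of the two quantities described above: (1) the steady-state
# probability of the saccade state in a two-state (fixate/saccade) Markov
# model, and (2) the KL divergence between consecutive saliency maps used to
# segment the video. The transition matrix here is illustrative.
import numpy as np

# P[i, j] = probability of moving from state i to state j (0 = fixate, 1 = saccade),
# as would be derived from saliency maps of consecutive frames.
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])

# steady state: left eigenvector of P with eigenvalue 1, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
stationary /= stationary.sum()
vad_estimate = stationary[1]                 # probability mass in the saccade state
print(f"estimated VAD (steady-state saccade probability): {vad_estimate:.3f}")

def kl_divergence(p_map, q_map, eps=1e-12):
    """KL divergence between two saliency maps treated as distributions."""
    p = p_map.ravel() / p_map.sum()
    q = q_map.ravel() / q_map.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(3)
prev_map, curr_map = rng.random((36, 64)), rng.random((36, 64))
print(f"KL between consecutive saliency maps: {kl_divergence(prev_map, curr_map):.4f}")
```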
Visions of our Planet's Atmosphere, Land and Oceans
NASA Technical Reports Server (NTRS)
Hasler, A. F.
2002-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to South Africa, Cape Town, and Johannesburg using NASA Terra/MODIS data, Landsat data, and 1 m IKONOS 'Spy Satellite' data. Zoom in to any place in South Africa using Earth Viewer 3D from Keyhole Inc. and Landsat data at 30 m resolution. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies, including hurricanes and 'tornadoes'. See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions such as Terra, GOES, TRMM, SeaWiFS, and Landsat 7, including 1-min GOES rapid scan image sequences of the November 9th, 2001 Midwest tornadic thunderstorms, and have them explained.
Sticks and Stones are Bones: The Eclectic Use of Lines.
ERIC Educational Resources Information Center
Denton, Craig L.
Lines are elemental design devices that provide the primary structure for visual expressions in printed media. Gestalt principles of perception emphasize the role of the viewer, so the energy of the lines and the commercial viability of a particular design depend upon the designer's and photojournalist's understanding of both the viewer's…
FNV: light-weight flash-based network and pathway viewer.
Dannenfelser, Ruth; Lachmann, Alexander; Szenk, Mariola; Ma'ayan, Avi
2011-04-15
Network diagrams are commonly used to visualize biochemical pathways by displaying the relationships between genes, proteins, mRNAs, microRNAs, metabolites, regulatory DNA elements, diseases, viruses and drugs. While there are several currently available web-based pathway viewers, there is still room for improvement. To this end, we have developed a flash-based network viewer (FNV) for the visualization of small to moderately sized biological networks and pathways. Written in Adobe ActionScript 3.0, the viewer accepts simple Extensible Markup Language (XML) formatted input files to display pathways in vector graphics on any web page, providing flexible layout options, interactivity with the user through tool tips and hyperlinks, and the ability to rearrange nodes on the screen. FNV was utilized as a component in several web-based systems, namely Genes2Networks, Lists2Networks, KEA, ChEA and PathwayGenerator. In addition, FNV can be used to embed pathways inside PDF files for the communication of pathways in soft publication materials. FNV is available for use and download along with the supporting documentation and sample networks at http://www.maayanlab.net/FNV. avi.maayan@mssm.edu.
Moving through a multiplex holographic scene
NASA Astrophysics Data System (ADS)
Mrongovius, Martina
2013-02-01
This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.
BrainNet Viewer: a network visualization tool for human brain connectomics.
Xia, Mingrui; Wang, Jinhui; He, Yong
2013-01-01
The human brain is a complex system whose topological organization can be represented using connectomics. Recent studies have shown that human connectomes can be constructed using various neuroimaging technologies and further characterized using sophisticated analytic strategies, such as graph theory. These methods reveal the intriguing topological architectures of human brain networks in healthy populations and explore the changes throughout normal development and aging and under various pathological conditions. However, given the huge complexity of this methodology, toolboxes for graph-based network visualization are still lacking. Here, using MATLAB with a graphical user interface (GUI), we developed a graph-theoretical network visualization toolbox, called BrainNet Viewer, to illustrate human connectomes as ball-and-stick models. Within this toolbox, several combinations of defined files with connectome information can be loaded to display different combinations of brain surface, nodes and edges. In addition, display properties, such as the color and size of network elements or the layout of the figure, can be adjusted within a comprehensive but easy-to-use settings panel. Moreover, BrainNet Viewer draws the brain surface, nodes and edges in sequence and displays brain networks in multiple views, as required by the user. The figure can be manipulated with certain interaction functions to display more detailed information. Furthermore, the figures can be exported as commonly used image file formats or demonstration video for further use. BrainNet Viewer helps researchers to visualize brain networks in an easy, flexible and quick manner, and this software is freely available on the NITRC website (www.nitrc.org/projects/bnv/).
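A connectome rendered as a ball-and-stick model is, at its core, a set of spheres placed at node coordinates plus line segments for supra-threshold edges. The sketch below is a generic illustration of that data-preparation step in Python; it does not reproduce BrainNet Viewer's MATLAB implementation, and the function and parameter names are hypothetical.

```python
import numpy as np

def ball_and_stick(coords, node_values, connectivity, edge_threshold=0.0):
    """Build a ball-and-stick description of a brain network.

    coords       : (N, 3) array of node coordinates (e.g., in MNI space)
    node_values  : (N,) array used to scale node (ball) radii
    connectivity : (N, N) symmetric matrix of edge weights
    Returns lists of balls (center, radius) and sticks (endpoints, weight).
    """
    coords = np.asarray(coords, dtype=float)
    node_values = np.asarray(node_values, dtype=float)
    conn = np.asarray(connectivity, dtype=float)

    # Balls: radius proportional to the node value (degree, strength, ...).
    max_val = node_values.max() if node_values.max() > 0 else 1.0
    balls = [(coords[i], 1.0 + 2.0 * node_values[i] / max_val)
             for i in range(len(coords))]

    # Sticks: one segment per supra-threshold edge (upper triangle only).
    sticks = []
    for i in range(conn.shape[0]):
        for j in range(i + 1, conn.shape[1]):
            if abs(conn[i, j]) > edge_threshold:
                sticks.append((coords[i], coords[j], conn[i, j]))
    return balls, sticks
```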
Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization
NASA Astrophysics Data System (ADS)
Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.
2015-02-01
This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.
A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century
NASA Astrophysics Data System (ADS)
Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed
2014-12-01
In the last few years, stereoscopic technology has developed very rapidly and has been employed in many different fields, such as entertainment. Due to the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a novel survey of the stereoscopic entertainment aspects is presented by discussing the significant development of 3D cinema, the major developments in 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewers' visual system. Some stereoscopic viewers are not satisfied because they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or complain of sickness. Therefore, we also discuss stereoscopic visual discomfort and to what extent viewers will experience eye fatigue while watching 3D content or playing 3D games. Solutions suggested in the literature for this problem are discussed.
Di Marco, Aimee N; Jeyakumar, Jenifa; Pratt, Philip J; Yang, Guang-Zhong; Darzi, Ara W
2016-01-01
To compare surgical performance with transanal endoscopic surgery (TES) using a novel 3-dimensional (3D) stereoscopic viewer against the current modalities of a 3D stereoendoscope, 3D, and 2-dimensional (2D) high-definition monitors. TES is accepted as the primary treatment for selected rectal tumors. Current TES systems offer a 2D monitor, or 3D image, viewed directly via a stereoendoscope, necessitating an uncomfortable operating position. To address this and provide a platform for future image augmentation, a 3D stereoscopic display was created. Forty participants, of mixed experience level, completed a simulated TES task using 4 visual displays (novel stereoscopic viewer and currently utilized stereoendoscope, 3D, and 2D high-definition monitors) in a randomly allocated order. Primary outcome measures were: time taken, path length, and accuracy. Secondary outcomes were: task workload and participant questionnaire results. Median time taken and path length were significantly shorter for the novel viewer versus 2D and 3D, and not significantly different to the traditional stereoendoscope. Significant differences were found in accuracy, task workload, and questionnaire assessment in favor of the novel viewer, as compared to all 3 modalities. This novel 3D stereoscopic viewer allows surgical performance in TES equivalent to that achieved using the current stereoendoscope and superior to standard 2D and 3D displays, but with lower physical and mental demands for the surgeon. Participants expressed a preference for this system, ranking it more highly on a questionnaire. Clinical translation of this work has begun with the novel viewer being used in 5 TES patients.
Evaluation of DICOM viewer software for workflow integration in clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.
2015-03-01
The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even simple visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, with each level raising the functionality but also the demands on interfaces. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey involves the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.
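As an illustration of the kind of criteria-based comparison the survey describes, the sketch below scores viewers with a simple weighted sum over the six criteria classes. The weights and per-viewer ratings are hypothetical placeholders, not values from the published evaluation.

```python
# Hypothetical criteria weights; the study's actual scoring scheme may differ.
WEIGHTS = {"license": 2, "support": 1, "platform": 1,
           "interfaces": 3, "viewing_2d": 2, "viewing_3d": 2}

def score_viewer(ratings: dict) -> float:
    """Weighted sum of per-criterion ratings (each rating in 0..1)."""
    return sum(WEIGHTS[c] * ratings.get(c, 0.0) for c in WEIGHTS)

# Illustrative ratings for two open-source viewers (values are made up).
candidates = {
    "Weasis":    {"license": 1.0, "support": 0.8, "platform": 1.0,
                  "interfaces": 0.9, "viewing_2d": 0.9, "viewing_3d": 0.5},
    "MicroView": {"license": 1.0, "support": 0.6, "platform": 0.8,
                  "interfaces": 0.5, "viewing_2d": 0.7, "viewing_3d": 0.9},
}
ranked = sorted(candidates, key=lambda name: score_viewer(candidates[name]), reverse=True)
print(ranked)
```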
3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.
Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S
2015-10-20
Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes
ERIC Educational Resources Information Center
Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike
2010-01-01
Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…
Reading Reception: Mediation and Transparency in Viewers' Accounts of a TV Programme.
ERIC Educational Resources Information Center
Richardson, Kay; Corner, John
This paper addresses questions about the processes involved when viewers "make sense" out of the diverse visual and aural signs of a television program and then render that sense in a spoken account. A pilot study was conducted to explore the manner in which modes of viewing, and talk about viewing, include or exclude recognition of…
ERIC Educational Resources Information Center
Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Mate, Esteban Garcia
2012-01-01
The purpose of this study is to determine which 3D viewers should be used for the display of interactive graphic engineering documents, so that the visualization and manipulation of 3D models provide useful support to students of industrial engineering (mechanical, organizational, electronic engineering, etc). The technical features of 26 3D…
Antanaviciute, Agne; Baquero-Perez, Belinda; Watson, Christopher M; Harrison, Sally M; Lascelles, Carolina; Crinnion, Laura; Markham, Alexander F; Bonthron, David T; Whitehouse, Adrian; Carr, Ian M
2017-10-01
Recent methods for transcriptome-wide N6-methyladenosine (m6A) profiling have facilitated investigations into the RNA methylome and established m6A as a dynamic modification that has critical regulatory roles in gene expression and may play a role in human disease. However, bioinformatics resources available for the analysis of m6A sequencing data are still limited. Here, we describe m6aViewer, a cross-platform application for analysis and visualization of m6A peaks from sequencing data. m6aViewer implements a novel m6A peak-calling algorithm that identifies high-confidence methylated residues with more precision than previously described approaches. The application enables data analysis through a graphical user interface, and thus, in contrast to other currently available tools, does not require the user to be skilled in computer programming. m6aViewer and test data can be downloaded here: http://dna2.leeds.ac.uk/m6a. © 2017 Antanaviciute et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Art Expertise and the Processing of Titled Abstract Art.
Mullennix, John W; Robinet, Julien
2018-04-01
The effect of art expertise on viewers' processing of titled visual artwork was examined. The study extended the research of Leder, Carbon, and Ripsas by explicitly selecting art novices and art experts. The study was designed to test assumptions about how expertise modulates context in the form of titles for artworks. Viewers rated a set of abstract paintings for liking and understanding. The type of title accompanying the artwork (descriptive or elaborative) was manipulated. Viewers were allotted as much time as they wished to view each artwork. For judgments of liking, novices and experts both liked artworks with elaborative titles better, with overall rated liking similar for both groups. For judgments of understanding, the type of title had no effect on ratings for either novices or experts. However, experts' rated understanding was higher than novices', and experts made their decisions faster than novices. An analysis of viewers' art expertise revealed that expertise was correlated with understanding, but not liking. Overall, the results suggest that both novices and experts integrate the title with the visual image in a similar manner. However, expertise differentially affected liking and understanding. The results differ from those obtained by Leder et al. The differences between the studies are discussed.
Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras
NASA Technical Reports Server (NTRS)
Amer, Tahani R.; Goad, William K.
2005-01-01
Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written using Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.
Land use mapping and modelling for the Phoenix quadrangle
NASA Technical Reports Server (NTRS)
Place, J. L. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Experimentation with 70mm squares cut from ERTS-1 9.5 inch MSS positive transparencies in an I2S color additive viewer, a Richardson film production viewer at 10X magnification, and in a microfiche viewer at 12X and 18X magnification has indicated that band 5 photography provides the most useful interpretable data. In the I2S viewer high intensities of blue and red light in bands 4 and 6 respectively enhance faint vegetation patterns not easily detectable. Slides produced from 35mm color transparencies made by photographing the I2S viewing screen are suitable visual aids for use during presentation. Interpretation of MSS transparencies allowed compilation of a map of land use change in the Phoenix quadrangle.
Visual Dialect: Ethnovisual and Sociovisual Elements of Design in Public Service Communication.
ERIC Educational Resources Information Center
Schiffman, Carole B.
Graphic design is a form of communication by which visual messages are conveyed to a viewer. Audience needs and views must steer the design process when constructing public service visual messages. Well-educated people may be better able to comprehend visuals which require some level of interpretation or extend beyond their world view. Public…
"Key Visuals" as Correlates of Interest in TV Ads.
ERIC Educational Resources Information Center
Reid, Leonard N.; Haan, David
1979-01-01
Concludes that "key visuals" (single frames of artwork storyboards or finished television commercials that sum up the creative strategy of the whole commercial) can be used to pretest the interest levels of viewers in television commercials. (GT)
Does 3D produce more symptoms of visually induced motion sickness?
Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin
2013-01-01
3D stereoscopy technology with high-quality images and depth perception provides entertainment to its viewers. However, the technology is not yet mature and sometimes may have adverse effects on viewers. Some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D environments. Subjective and objective data were recorded and compared in both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For the objective measurement, ECG data were recorded to obtain heart rate variability (HRV), where the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to find changes in the participants' feelings over time. The average scores of nausea and disorientation and the total SSQ score show a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.
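For the objective measure mentioned above, the LF/HF ratio is conventionally computed from the RR-interval series by resampling it onto a uniform time grid, estimating a power spectrum, and integrating the low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.40 Hz) bands. The sketch below shows one common way to do this, assuming RR intervals in seconds and a 4 Hz resampling rate; it is illustrative and not the analysis pipeline used in the study.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals_s, resample_hz=4.0):
    """LF/HF ratio from a sequence of RR intervals (seconds).

    The unevenly spaced RR series is interpolated onto a uniform grid
    before spectral estimation; LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz.
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    beat_times = np.cumsum(rr)                       # time of each beat
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / resample_hz)
    rr_uniform = np.interp(t_uniform, beat_times, rr)

    # Welch power spectral density of the detrended tachogram.
    freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=resample_hz, nperseg=256)
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs < 0.40)
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf / hf
```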
Conveying Global Circulation Patterns in HDTV
NASA Astrophysics Data System (ADS)
Gardiner, N.; Janowiak, J.; Kinzler, R.; Trakinski, V.
2006-12-01
The American Museum of Natural History has partnered with the National Centers for Environmental Prediction (NCEP) to educate general audiences about weather and climate using high definition video broadcasts built from half-hourly global mosaics of infrared (IR) data from five geostationary satellites. The dataset being featured was developed by NCEP to improve precipitation estimates from microwave data that have finer spatial resolution but poorer temporal coverage. The IR data span +/-60 degrees latitude and show circulation patterns at sufficient resolution to teach informal science center visitors about both weather and climate events and concepts. Design and editorial principles for this media program have been guided by lessons learned from production and annual updates of visualizations that cover eight themes in both biological and Earth system sciences. Two formative evaluations on two dates, including interviews and written surveys of 480 museum visitors ranging in age from 13 to over 60, helped refine the design and implementation of the weather and climate program and demonstrated that viewers understood the program's initial literacy objectives, including: (1) conveying the passage of time and currency of visualized data; (2) geographic relationships inherent to atmospheric circulation patterns; and (3) the authenticity of visualized data, i.e., their origin from earth-orbiting satellites. Surveys also indicated an interest and willingness to learn more about weather and climate principles and events. Expanded literacy goals guide ongoing, biweekly production and distribution of global cloud visualization pieces that reach combined audiences of approximately 10 million. Two more rounds of evaluation are planned over the next two years to assess the effectiveness of the media program in addressing these expanded literacy goals.
NASA Technical Reports Server (NTRS)
Lammers, Matt
2017-01-01
Geospatial weather visualization remains predominantly a two-dimensional endeavor. Even popular advanced tools like the Nullschool Earth display 2-dimensional fields on a 3-dimensional globe. Yet much of the observational data and model output contains detailed three-dimensional fields. In 2014, NASA and JAXA (the Japan Aerospace Exploration Agency) launched the Global Precipitation Measurement (GPM) satellite. Its two instruments, the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI), observe much of the Earth's atmosphere between 65 degrees North latitude and 65 degrees South latitude. As part of the analysis and visualization tools developed by the Precipitation Processing System (PPS) Group at NASA Goddard, a series of CesiumJS-based globe viewers [using Cesium Markup Language (CZML), JavaScript (JS) and JavaScript Object Notation (JSON)] has been developed to improve data acquisition decision making and to enhance scientific investigation of the satellite data. Other demos have also been built to illustrate the capabilities of CesiumJS in presenting atmospheric data, including model forecasts of hurricanes, observed surface radar data, and gridded analyses of global precipitation. This talk will present these websites and the various workflows used to convert binary satellite and model data into a form easily integrated with CesiumJS.
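One generic step in such a conversion workflow is flattening a gridded 3-D field into discrete point records (longitude, latitude, altitude, value) that a downstream packager can turn into a globe-ready format. The sketch below illustrates only that step under assumed array inputs; it is not the PPS group's actual tooling and does not emit the real CZML schema.

```python
import numpy as np

def grid_to_point_records(lons, lats, alts_m, values, threshold=0.0):
    """Flatten a 3-D gridded field into a list of point records.

    lons, lats : 1-D arrays of grid-cell centers (degrees)
    alts_m     : 1-D array of level altitudes (meters)
    values     : array of shape (len(alts_m), len(lats), len(lons))
    Only cells above `threshold` are kept, which keeps the output small.
    """
    records = []
    for k, alt in enumerate(alts_m):
        for j, lat in enumerate(lats):
            for i, lon in enumerate(lons):
                v = float(values[k, j, i])
                if v > threshold:
                    records.append({"lon": float(lon), "lat": float(lat),
                                    "alt_m": float(alt), "value": v})
    # The record list can then be serialized (e.g., with json.dumps) and
    # handed to whatever packaging step a particular globe viewer expects.
    return records
```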
The Best Colors for Audio-Visual Materials for More Effective Instruction.
ERIC Educational Resources Information Center
Start, Jay
A number of variables may affect the ability of students to perceive, and learn from, instructional materials. The objectives of the study presented here were to determine the projected color that provided the best visual acuity for the viewer, and the necessary minimum exposure time for achieving maximum visual acuity. Fifty…
Gerth, Victor E; Vize, Peter D
2005-04-01
The Gene Expression Viewer is a web-launched three-dimensional visualization tool, tailored to compare surface reconstructions of multi-channel image volumes generated by confocal microscopy or micro-CT.
An empirical investigation of the visual rightness theory of picture perception.
Locher, Paul J
2003-10-01
This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate reproductions of original abstract and representational paintings by renowned artists from two experimentally manipulated, less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions in the originals' principal structural networks. It was found that participants were significantly more successful than chance in discriminating between originals and their highly altered, but not their slightly altered, perturbations. Accuracy of detection was found to be a function of the style of painting and of a viewer's way of thinking about a work, as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form, and highest for representational works when their content and realism were the focus of attention. Findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.
Stepping Into Science Data: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.
2017-12-01
Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM). Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and inter-disciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360-degree content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.
On Violence against Objects: A Visual Chord
ERIC Educational Resources Information Center
Staley, David J.
2010-01-01
"On Violence Against Objects" is best viewed over several minutes; allow the images to go through several iterations in order to see as many juxtapositions as possible.The visual argument of the work emerges as the viewer perceives analogies between the various images.
NASA Technical Reports Server (NTRS)
Senger, Steven O.
1998-01-01
Volumetric data sets have become common in medicine and many sciences through technologies such as computed x-ray tomography (CT), magnetic resonance (MR), positron emission tomography (PET), confocal microscopy, and 3D ultrasound. When presented with 2D images, humans immediately and unconsciously begin a visual analysis of the scene. The viewer surveys the scene, identifying significant landmarks and building an internal mental model of the presented information. The identification of features is strongly influenced by the viewer's expectations based upon their expert knowledge of what the image should contain. While not a conscious activity, the viewer makes a series of choices about how to interpret the scene. These choices occur in parallel with viewing the scene and effectively change the way the viewer sees the image. It is this interaction of viewing and choice which is the basis of many familiar visual illusions. This is especially important in the interpretation of medical images, where it is the expert knowledge of the radiologist that interprets the image. For 3D data sets this interaction of view and choice is frustrated because choices must precede the visualization of the data set. It is not possible to visualize the data set without making some initial choices that determine how the volume of data is presented to the eye. These choices include viewpoint orientation, region identification, and color and opacity assignments. Further compounding the problem is the fact that these visualization choices are defined in terms of computer graphics as opposed to the language of the expert's knowledge. The long-term goal of this project is to develop an environment where the user can interact with volumetric data sets using tools which promote the utilization of expert knowledge by incorporating visualization and choice into a tight computational loop. The tools will support activities involving the segmentation of structures, construction of surface meshes, and local filtering of the data set. To conform to this environment, tools should have several key attributes. First, they should rely only on computations over a local neighborhood of the probe position. Second, they should operate iteratively over time, converging towards a limit behavior. Third, they should adapt to user input, modifying their operational parameters over time.
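A concrete example of the up-front choices the abstract describes is the color and opacity assignment: before a volume can be rendered, every scalar voxel value must be mapped to RGBA. The sketch below is a minimal one-dimensional transfer function, assuming user-chosen low/high thresholds; those thresholds are exactly the kind of choice that normally precedes seeing the data.

```python
import numpy as np

def linear_transfer_function(volume, low, high):
    """Map scalar voxel values to RGBA colors before rendering.

    Values at or below `low` become transparent blue, values at or above
    `high` become opaque red, with linear interpolation in between.
    """
    vol = np.asarray(volume, dtype=float)
    t = np.clip((vol - low) / (high - low), 0.0, 1.0)
    rgba = np.empty(t.shape + (4,), dtype=float)
    rgba[..., 0] = t          # red increases with the scalar value
    rgba[..., 1] = 0.0        # no green in this simple ramp
    rgba[..., 2] = 1.0 - t    # blue decreases with the scalar value
    rgba[..., 3] = t          # opacity increases with the scalar value
    return rgba
```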
The reference frame of figure-ground assignment.
Vecera, Shaun P
2004-10-01
Figure-ground assignment involves determining which visual regions are foreground figures and which are backgrounds. Although figure-ground processes provide important inputs to high-level vision, little is known about the reference frame in which the figure's features and parts are defined. Computational approaches have suggested a retinally based, viewer-centered reference frame for figure-ground assignment, but figural assignment could also be computed on the basis of environmental regularities in an environmental reference frame. The present research used a newly discovered cue, lower region, to examine the reference frame of figure-ground assignment. Possible reference frames were misaligned by changing the orientation of viewers by having them tilt their heads (Experiments 1 and 2) or turn them upside down (Experiment 3). The results of these experiments indicated that figure-ground perception followed the orientation of the viewer, suggesting a viewer-centered reference frame for figure-ground assignment.
A Hyperbolic Ontology Visualization Tool for Model Application Programming Interface Documentation
NASA Technical Reports Server (NTRS)
Hyman, Cody
2011-01-01
Spacecraft modeling, a critically important portion of validating planned spacecraft activities, is currently carried out using a time-consuming method of mission-to-mission model implementations and integration. A current project in early development, Integrated Spacecraft Analysis (ISCA), aims to remedy this hindrance by providing reusable architectures and reducing time spent integrating models with planning and sequencing tools. The principal objective of this internship was to develop a user interface for an experimental ontology-based structure visualization of navigation and attitude control system modeling software. To satisfy this, a number of tree and graph visualization tools were researched and a Java-based hyperbolic graph viewer was selected for experimental adaptation. Early results show promise in the ability to organize and display large amounts of spacecraft model documentation efficiently and effectively through a web browser. This viewer serves as a conceptual implementation for future development, but trials with both ISCA developers and end users should be performed to truly evaluate the effectiveness of continued development of such visualizations.
Hsieh, Paul A.; Winston, Richard B.
2002-01-01
Model Viewer is a computer program that displays the results of three-dimensional groundwater models. Scalar data (such as hydraulic head or solute concentration) may be displayed as a solid or a set of isosurfaces, using a red-to-blue color spectrum to represent a range of scalar values. Vector data (such as velocity or specific discharge) are represented by lines oriented to the vector direction and scaled to the vector magnitude. Model Viewer can also display pathlines, cells or nodes that represent model features such as streams and wells, and auxiliary graphic objects such as grid lines and coordinate axes. Users may crop the model grid in different orientations to examine the interior structure of the data. For transient simulations, Model Viewer can animate the time evolution of the simulated quantities. The current version (1.0) of Model Viewer runs on Microsoft Windows 95, 98, NT and 2000 operating systems, and supports the following models: MODFLOW-2000, MODFLOW-2000 with the Ground-Water Transport Process, MODFLOW-96, MOC3D (Version 3.5), MODPATH, MT3DMS, and SUTRA (Version 2D3D.1). Model Viewer is designed to directly read input and output files from these models, thus minimizing the need for additional postprocessing. This report provides an overview of Model Viewer. Complete instructions on how to use the software are provided in the on-line help pages.
Task-Driven Evaluation of Aggregation in Time Series Visualization
Albers, Danielle; Correll, Michael; Gleicher, Michael
2014-01-01
Many visualization tasks require the viewer to make judgments about aggregate properties of data. Recent work has shown that viewers can perform such tasks effectively, for example to efficiently compare the maximums or means over ranges of data. However, this work also shows that such effectiveness depends on the designs of the displays. In this paper, we explore this relationship between aggregation task and visualization design to provide guidance on matching tasks with designs. We combine prior results from perceptual science and graphical perception to suggest a set of design variables that influence performance on various aggregate comparison tasks. We describe how choices in these variables can lead to designs that are matched to particular tasks. We use these variables to assess a set of eight different designs, predicting how they will support a set of six aggregate time series comparison tasks. A crowd-sourced evaluation confirms these predictions. These results not only provide evidence for how the specific visualizations support various tasks, but also suggest using the identified design variables as a tool for designing visualizations well suited for various types of tasks. PMID:25343147
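The aggregate comparison tasks studied here amount to computing a summary statistic (mean, maximum, and so on) over two ranges of a series and comparing the results. A minimal sketch, with a synthetic series and hypothetical ranges:

```python
import numpy as np

def compare_ranges(series, range_a, range_b, statistic=np.mean):
    """Compare an aggregate statistic over two ranges of a time series.

    range_a / range_b are (start, end) index pairs; `statistic` can be
    np.mean, np.max, etc.  Returns the signed difference (A minus B).
    """
    a = statistic(series[range_a[0]:range_a[1]])
    b = statistic(series[range_b[0]:range_b[1]])
    return a - b

rng = np.random.default_rng(0)
y = rng.normal(size=365).cumsum()                       # synthetic daily series
print(compare_ranges(y, (0, 90), (275, 365)))           # compare means
print(compare_ranges(y, (0, 90), (275, 365), np.max))   # compare maximums
```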
What do we perceive from motion pictures? A computational account.
Cheong, Loong-Fah; Xiang, Xu
2007-06-01
Cinema viewed from a location other than a canonical viewing point (CVP) presents distortions to the viewer in both its static and its dynamic aspects. Past works have investigated mainly the static aspect of this problem and attempted to explain why viewers still seem to perceive the scene very well. The dynamic aspect of depth perception, which is known as structure from motion, and its possible distortion, have not been well investigated. We derive the dynamic depth cues perceived by the viewer and use the so-called isodistortion framework to understand its distortion. The result is that viewers seated at a reasonably central position experience a shift in the intrinsic parameters of their visual systems. Despite this shift, the key properties of the perceived depths remain largely the same, being determined in the main by the accuracy to which extrinsic motion parameters can be recovered. For a viewer seated at a noncentral position and watching the movie screen at a slant angle, the view is related to the view at the CVP by a homography, resulting in various aberrations such as noncentral projection.
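The claim that an off-axis view of the screen is related to the canonical-viewing-point view by a homography can be made concrete by mapping image points through a 3x3 matrix. The sketch below applies a generic planar homography; the matrix values are illustrative and are not derived from the paper's viewing geometry.

```python
import numpy as np

def apply_homography(H, points):
    """Map 2-D image points through a 3x3 homography H.

    points : (N, 2) pixel coordinates in the canonical-viewing-point image.
    Returns the corresponding (N, 2) coordinates seen by the off-axis viewer.
    """
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ np.asarray(H, dtype=float).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to inhomogeneous

# Illustrative homography for a mildly slanted screen (values are made up).
H = np.array([[1.0, 0.05, 0.0],
              [0.0, 1.00, 0.0],
              [1e-4, 0.0, 1.0]])
print(apply_homography(H, [[100.0, 200.0], [640.0, 360.0]]))
```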
Solimini, Angelo G.
2013-01-01
Background The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530
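The SSQ scores reported above are conventionally computed from 0-3 symptom ratings using the Kennedy et al. (1993) subscale weights (9.54 for nausea, 7.58 for oculomotor, 13.92 for disorientation, 3.74 for the total). The sketch below applies those commonly cited weights to pre-summed item scores; the item-to-subscale bookkeeping is omitted, so treat it as an illustration rather than the study's scoring code.

```python
# Commonly cited SSQ subscale weights (Kennedy et al., 1993); the raw item
# sums are assumed to have been computed per subscale beforehand.
WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(item_sums):
    """item_sums: dict with the raw 0-3 item sums for each subscale."""
    scores = {k: item_sums[k] * w for k, w in WEIGHTS.items()}
    scores["total"] = sum(item_sums.values()) * TOTAL_WEIGHT
    return scores

post = ssq_scores({"nausea": 3, "oculomotor": 4, "disorientation": 2})
print(post["total"] > 15)   # the "some sickness" criterion used in the study
```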
Visualization of protein sequence features using JavaScript and SVG with pViz.js.
Mukhyala, Kiran; Masselot, Alexandre
2014-12-01
pViz.js is a visualization library for displaying protein sequence features in a Web browser. By simply providing a sequence and the locations of its features, this lightweight, yet versatile, JavaScript library renders an interactive view of the protein features. Interactive exploration of protein sequence features over the Web is a common need in Bioinformatics. Although many Web sites have developed viewers to display these features, their implementations are usually focused on data from a specific source or use case. Some of these viewers can be adapted to fit other use cases but are not designed to be reusable. pViz makes it easy to display features as boxes aligned to a protein sequence with zooming functionality but also includes predefined renderings for secondary structure and post-translational modifications. The library is designed to further customize this view. We demonstrate such applications of pViz using two examples: a proteomic data visualization tool with an embedded viewer for displaying features on protein structure, and a tool to visualize the results of the variant_effect_predictor tool from Ensembl. pViz.js is a JavaScript library, available on github at https://github.com/Genentech/pviz. This site includes examples and functional applications, installation instructions and usage documentation. A Readme file, which explains how to use pViz with examples, is available as Supplementary Material A. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Plug and Play web-based visualization of mobile air monitoring data (Abstract)
EPA’s Real-Time Geospatial (RETIGO) Data Viewer web-based tool is a new program reducing the technical barrier to visualize and understand geospatial air data time series collected using wearable, bicycle-mounted, or vehicle-mounted air sensors. The RETIGO tool, with anticipated...
Reappraising Abstract Paintings after Exposure to Background Information
Park, Seongmin A.; Yun, Kyongsik; Jeong, Jaeseung
2015-01-01
Can knowledge help viewers when they appreciate an artwork? Experts’ judgments of the aesthetic value of a painting often differ from the estimates of naïve viewers, and this phenomenon is especially pronounced in the aesthetic judgment of abstract paintings. We compared the changes in aesthetic judgments of naïve viewers while they were progressively exposed to five pieces of background information. The participants were asked to report their aesthetic judgments of a given painting after each piece of information was presented. We found that commentaries by the artist and a critic significantly increased the subjective aesthetic ratings. Does knowledge enable experts to attend to the visual features in a painting and to link it to the evaluative conventions, thus potentially causing different aesthetic judgments? To investigate whether a specific pattern of attention is essential for the knowledge-based appreciation, we tracked the eye movements of subjects while viewing a painting with a commentary by the artist and with a commentary by a critic. We observed that critics’ commentaries directed the viewers’ attention to the visual components that were highly relevant to the presented commentary. However, attention to specific features of a painting was not necessary for increasing the subjective aesthetic judgment when the artists’ commentary was presented. Our results suggest that at least two different cognitive mechanisms may be involved in knowledge-guided aesthetic judgments while viewers reappraise a painting. PMID:25945789
Moveable Feast: A Distributed-Data Case Study Engine for Yotc
NASA Astrophysics Data System (ADS)
Mapes, B. E.
2014-12-01
The promise of YOTC, a richly detailed global view of the tropical atmosphere and its processes down to 1/4 degree resolution, can now be attained without a lot of downloading and programming chores. Many YOTC datasets are served online: all the global reanalyses, including the YOTC-specific ECMWF 1/4 degree set, as well as satellite data including IR and TRMM 3B42. Data integration and visualization are easy with a new YOTC 'case study engine' in the free, all-platform, click-to-install Integrated Data Viewer (IDV) software from Unidata. All the dataset access points, along with many evocative and adjustable display layers, can be loaded with a single click (and then a few minutes wait), using the special YOTC bundle in the Mapes IDV collection (http://www.rsmas.miami.edu/users/bmapes/MapesIDVcollection.html). Time ranges can be adjusted with a calendar widget, and spatial subset regions can be selected with a shift-rubberband mouse operation. The talk will showcase visualizations of several YOTC weather events and process estimates, and give a view of how these and any other YOTC cases can be reproduced on any networked computer.
Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine
ERIC Educational Resources Information Center
Fox, Sharon E.; Faulkner-Jones, Beverly E.
2017-01-01
Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…
Wayne Tlusty
1979-01-01
The concept of Visual Absorption Capability (VAC) is widely used by Forest Service Landscape Architects. The use of computer-generated graphics can aid in combining the number of times an area is seen, the distance from the observer, and the land aspect relative to the viewer to determine visual magnitude. Perspective Plot allows both fast and inexpensive graphic analysis of VAC allocations, for...
Lightweight genome viewer: portable software for browsing genomics data in its chromosomal context
Faith, Jeremiah J; Olson, Andrew J; Gardner, Timothy S; Sachidanandam, Ravi
2007-01-01
Background Lightweight genome viewer (lwgv) is a web-based tool for visualization of sequence annotations in their chromosomal context. It performs most of the functions of larger genome browsers, while relying on standard flat-file formats and bypassing the database needs of most visualization tools. Visualization as an aid to discovery requires display of novel data in conjunction with static annotations in their chromosomal context. With database-based systems, displaying dynamic results requires temporary tables that need to be tracked for removal. Results lwgv simplifies the visualization of user-generated results on a local computer. The dynamic results of these analyses are written to transient files, which can import static content from a more permanent file. lwgv is currently used in many different applications, from whole genome browsers to single-gene RNAi design visualization, demonstrating its applicability in a large variety of contexts and scales. Conclusion lwgv provides a lightweight alternative to large genome browsers for visualizing biological annotations and dynamic analyses in their chromosomal context. It is particularly suited for applications ranging from short sequences to medium-sized genomes when the creation and maintenance of a large software and database infrastructure is not necessary or desired. PMID:17877794
Differential emotion attribution to neutral faces of own and other races.
Hu, Chao S; Wang, Qiandong; Han, Tong; Weare, Ethan; Fu, Genyue
2017-02-01
Past research has demonstrated differential recognition of emotion on faces of different races. This paper reports the first study to explore differential emotion attribution to neutral faces of different races. Chinese and Caucasian adults viewed a series of Chinese and Caucasian neutral faces and judged their outward facial expression: neutral, positive, or negative. The results showed that both Chinese and Caucasian viewers perceived more Chinese faces than Caucasian faces as neutral. Nevertheless, Chinese viewers attributed positive emotion to Caucasian faces more than to Chinese faces, whereas Caucasian viewers attributed negative emotion to Caucasian faces more than to Chinese faces. Moreover, Chinese viewers attributed negative and neutral emotion to the faces of both races with no significant difference in frequency, whereas Caucasian viewers mostly attributed neutral emotion to the faces. These differences between Chinese and Caucasian viewers may be due to differential visual experience, culture, racial stereotypes, or expectations about the experiment. We also used eye tracking among the Chinese participants to explore the relationship between face-processing strategy and emotion attribution to neutral faces. The results showed that the interaction between emotion attribution and face race had a significant effect on face-processing strategy, such as the fixation proportion on the eyes and saccade amplitude. Additionally, pupil size was larger while processing Caucasian faces than while processing Chinese faces.
Instructional Television: Visual Production Techniques and Learning Comprehension.
ERIC Educational Resources Information Center
Silbergleid, Michael Ian
The purpose of this study was to determine if increasing levels of complexity in visual production techniques would increase the viewer's learning comprehension and the degree of liking expressed for a college-level instructional television program. A total of 119 mass communications students at the University of Alabama participated in the…
Interactive Visualization of National Airspace Data in 4D (IV4D)
2010-08-01
Research Laboratory) JView graphics engine. All of the software, IV4D/Viewer/JView, is written in Java and is platform independent, meaning that it... both parts. 3.3.1.1 Airspace Volumes: Once appropriate CSV or ACES XML airspace boundary files are selected from a standard Java File Chooser... persistence mechanism, Hibernate, was replaced with JDBC-specific code and, over time, quite a bit of JDBC support code was added to the Viewer and to
TerraLook: GIS-Ready Time-Series of Satellite Imagery for Monitoring Change
2008-01-01
TerraLook is a joint project of the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) with a goal of providing satellite images that anyone can use to see changes in the Earth's surface over time. Each TerraLook product is a user-specified collection of satellite images selected from imagery archived at the USGS Earth Resources Observation and Science (EROS) Center. Images are bundled with standards-compliant metadata, a world file, and an outline of each image's ground footprint, enabling their use in geographic information systems (GIS), image processing software, and Web mapping applications. TerraLook images are available through the USGS Global Visualization Viewer (http://glovis.usgs.gov).
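The world file bundled with each image is part of what makes it "GIS-ready": six plain-text lines define an affine transform from pixel coordinates to map coordinates. A minimal sketch of reading one and converting a pixel position, assuming the conventional six-line world-file layout:

```python
def pixel_to_map(world_file_path, col, row):
    """Convert a pixel (col, row) to map coordinates using a world file.

    A world file holds six lines: x pixel size (A), row rotation (D),
    column rotation (B), y pixel size (E, usually negative), and the x (C)
    and y (F) map coordinates of the center of the upper-left pixel.
    """
    with open(world_file_path) as wf:
        a, d, b, e, c, f = (float(wf.readline()) for _ in range(6))
    x = a * col + b * row + c   # easting of the pixel center
    y = d * col + e * row + f   # northing of the pixel center
    return x, y
```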
Aging affects the balance between goal-guided and habitual spatial attention.
Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V
2017-08-01
Visual clutter imposes significant challenges to older adults in everyday tasks and often calls on selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first-but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target was most often found before. Aging disrupted goal-guided but not habitual attention. Product, work, and home design for people of all ages--but especially for older individuals--should take into account the strong viewer-centered nature of habitual attention.
Monitoring Global Food Security with New Remote Sensing Products and Tools
NASA Astrophysics Data System (ADS)
Budde, M. E.; Rowland, J.; Senay, G. B.; Funk, C. C.; Husak, G. J.; Magadzire, T.; Verdin, J. P.
2012-12-01
Global agriculture monitoring is a crucial aspect of monitoring food security in the developing world. The Famine Early Warning Systems Network (FEWS NET) has a long history of using remote sensing and crop modeling to address food security threats in the form of drought, floods, pests, and climate change. In recent years, it has become apparent that FEWS NET requires the ability to apply monitoring and modeling frameworks at a global scale to assess potential impacts of foreign production and markets on food security at regional, national, and local levels. Scientists at the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center and the University of California Santa Barbara (UCSB) Climate Hazards Group have provided new and improved data products as well as visualization and analysis tools in support of the increased mandate for remote monitoring. We present our monitoring products for measuring actual evapotranspiration (ETa), normalized difference vegetation index (NDVI) in a near-real-time mode, and satellite-based rainfall estimates and derivatives. USGS FEWS NET has implemented a Simplified Surface Energy Balance (SSEB) model to produce operational ETa anomalies for Africa and Central Asia. During the growing season, ETa anomalies express surplus or deficit crop water use, which is directly related to crop condition and biomass. We present current operational products and provide supporting validation of the SSEB model. The expedited Moderate Resolution Imaging Spectroradiometer (eMODIS) production system provides FEWS NET with an improved NDVI dataset for crop and rangeland monitoring. eMODIS NDVI provides a reliable data stream with a relatively high spatial resolution (250-m) and short latency period (less than 12 hours) which allows for better operational vegetation monitoring. We provide an overview of these data and cite specific applications for crop monitoring. FEWS NET uses satellite rainfall estimates as inputs for monitoring agricultural food production and driving crop water balance models. We present a series of derived rainfall products and provide an update on efforts to improve satellite-based estimates. We also present advancements in monitoring tools, namely, the Early Warning eXplorer (EWX) and interactive rainfall and NDVI time series viewers. The EWX is a data analysis and visualization tool that allows users to rapidly visualize multiple remote sensing datasets and compare standardized anomaly maps and time series. The interactive time series viewers allow users to analyze rainfall and NDVI time series over multiple spatial domains. New and improved data products and more targeted analysis tools are a necessity as food security monitoring requirements expand and resources become limited.
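SSEB-style ETa products rest on a simple idea: scale reference evapotranspiration by an ET fraction derived from land-surface temperature relative to "hot" (dry) and "cold" (well-watered) reference pixels, then express the result as an anomaly against a historical median. The sketch below captures that idea under assumed inputs; it is a simplification for illustration, not the operational USGS implementation.

```python
import numpy as np

def sseb_eta(lst, t_hot, t_cold, eto):
    """Simplified surface-energy-balance style ET estimate.

    lst    : land-surface temperature grid (K)
    t_hot  : temperature of a dry, bare "hot" reference pixel (K)
    t_cold : temperature of a well-watered "cold" reference pixel (K)
    eto    : reference evapotranspiration grid (mm)
    The ET fraction is 1 near the cold reference and 0 near the hot one.
    """
    etf = np.clip((t_hot - np.asarray(lst, dtype=float)) / (t_hot - t_cold), 0.0, 1.0)
    return etf * np.asarray(eto, dtype=float)

def eta_anomaly_percent(eta_current, eta_median):
    """ETa anomaly expressed as a percent of the period-of-record median."""
    return 100.0 * eta_current / (eta_median + 1e-30)
```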
Are visual cue masking and removal techniques equivalent for studying perceptual skills in sport?
Mecheri, Sami; Gillet, Eric; Thouvarecq, Regis; Leroy, David
2011-01-01
The spatial-occlusion paradigm makes use of two techniques (masking and removing visual cues) to provide information about the anticipatory cues used by viewers. The visual scene resulting from the removal technique appears incongruous, yet the assumption that the two techniques are equivalent has become widespread. The present study was designed to address this issue by combining eye-movement recording with the two types of occlusion (removal versus masking) in a tennis serve-return task. Response accuracy and decision onsets were analysed. The results indicated that subjects had longer reaction times under the removal condition, with an identical proportion of correct responses. Also, the removal technique caused the subjects to rely on atypical search patterns. Our findings suggest that, when the removal technique was used, viewers were unable to rely systematically on stored memories to help them accomplish the interception task. The persistent failure to question some of the assumptions about the removal technique in applied visual research is highlighted, and suggestions for continued use of the masking technique are advanced.
Stereoscopy in cinematographic synthetic imagery
NASA Astrophysics Data System (ADS)
Eisenmann, Jonathan; Parent, Rick
2009-02-01
In this paper we present experiments and results pertaining to the perception of depth in stereoscopic viewing of synthetic imagery. In computer animation, typical synthetic imagery is highly textured and uses stylized illumination of abstracted material models by abstracted light source models. While there have been numerous studies concerning stereoscopic capabilities, conventions for staging and cinematography in stereoscopic movies have not yet been well-established. Our long-term goal is to measure the effectiveness of various cinematography techniques on the human visual system in a theatrical viewing environment. We would like to identify the elements of stereoscopic cinema that are important in terms of enhancing the viewer's understanding of a scene as well as providing guidelines for the cinematographer relating to storytelling. In these experiments we isolated stereoscopic effects by eliminating as many other visual cues as is reasonable. In particular, we aim to empirically determine what types of movement in synthetic imagery affect the perceptual depth sensing capabilities of our viewers. Using synthetic imagery, we created several viewing scenarios in which the viewer is asked to locate a target object's depth in a simple environment. The scenarios were specifically designed to compare the effectiveness of stereo viewing, camera movement, and object motion in aiding depth perception. Data were collected showing the error between the choice of the user and the actual depth value, and patterns were identified that relate the test variables to the viewer's perceptual depth accuracy in our theatrical viewing environment.
The Saccharomyces Genome Database Variant Viewer
Sheppard, Travis K.; Hitz, Benjamin C.; Engel, Stacia R.; Song, Giltae; Balakrishnan, Rama; Binkley, Gail; Costanzo, Maria C.; Dalusag, Kyla S.; Demeter, Janos; Hellerstedt, Sage T.; Karra, Kalpana; Nash, Robert S.; Paskov, Kelley M.; Skrzypek, Marek S.; Weng, Shuai; Wong, Edith D.; Cherry, J. Michael
2016-01-01
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is the authoritative community resource for the Saccharomyces cerevisiae reference genome sequence and its annotation. In recent years, we have moved toward increased representation of sequence variation and allelic differences within S. cerevisiae. The publication of numerous additional genomes has motivated the creation of new tools for their annotation and analysis. Here we present the Variant Viewer: a dynamic open-source web application for the visualization of genomic and proteomic differences. Multiple sequence alignments have been constructed across high quality genome sequences from 11 different S. cerevisiae strains and stored in the SGD. The alignments and summaries are encoded in JSON and used to create a two-tiered dynamic view of the budding yeast pan-genome, available at http://www.yeastgenome.org/variant-viewer. PMID:26578556
NASA Astrophysics Data System (ADS)
Chen, Tien-Li; Pan, Fang-Ming; Tsai, Jen-Hui
2013-03-01
This study investigated the correlation between the image projected by the co-brand design (Jimmy S.P.A. and STRAUSS) and the impression perceived by viewers. Visual images were used as the evaluation material, since judging a concrete object is an appropriate evaluation method. Many factors influence the evaluation of a design; this study is limited to the appearance of furniture transformed from Jimmy's picture books. Co-branding Jimmy S.P.A. and STRAUSS is not easy because the two brands do not share the same cultural and industry background and apply different marketing strategies, so design is used as the way to combine them. A semantic differential (SD) questionnaire was used to measure viewers' perceptions, and the objective of the study was to appraise how patrons perceive the co-brand image in furniture. The SD evaluation showed that if designers do not understand the image that viewers associate with Jimmy S.P.A. and STRAUSS, the furniture design cannot convey the intended feeling.
Visual perception and stereoscopic imaging: an artist's perspective
NASA Astrophysics Data System (ADS)
Mason, Steve
2015-03-01
This paper continues my February 2014 IS&T/SPIE convention exploration into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that, by using stereoscopic imaging, people may consciously experience, or see, what they are viewing and thereby become more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope to not only raise awareness of visual processing but also explore the differences and similarities between the artist and scientist―art increases right brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what the evidence and experience may indicate, in order to see what is happening in his work and to allow it to develop in ways he/she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just in the thinking, where insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the preverbal "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. People who have experienced these images in the context of examining their own visual process have been startled by the effect they have on how they perceive the world around them. For instance, when viewing the mountains on a trip to Montana, one woman exclaimed, "I could no longer see just mountains, but also so many amazing colors and shapes"―she could see beyond her preconceptions of mountains to realize more of the beauty that was really there, not just the objects she "thought" to be there. The awareness gained from experiencing the artist's perspective will help with creative thinking in particular and overall research in general. Perceiving the space in these works, completely removing the picture-plane by use of the 3D glasses, making a conscious connection between the feeling and visual content, and thus gaining a deeper appreciation of the visual process will all contribute to understanding how our thinking, our left-brain domination, gets in the way of our seeing what is right in front of us. We fool ourselves with concept and memory―experiencing these prints may help some come a little closer to reality.
Examples of Pre-College Programs that Teach Sustainability
NASA Astrophysics Data System (ADS)
Passow, M. J.
2015-12-01
Programs to help pre-college students understand the importance of Sustainability can be found around the world. A key feature for many is the collaboration among educators, researchers, and business. Two examples will be described to indicate what is being done and goals for the future. "Educação para a Sustentabilidade" ("Education for Sustainability", http://sustentabilidade.colband.net.br/) developed at the Colegio Bandeirantes in Sao Paulo, Brazil, is a popular extracurricular offering at one of Brazil's top schools that empowers students to investigate major issues facing their country and the world. They recognized that merely knowing is insufficient, so they have created several efforts towards an "environmentally friendly, socially just, and economically viable" world. The Education Project for Sustainability Science interacts with students in various grade levels within the school, participates in sustainability initiatives in other parts of the nation, and communicates electronically with like-minded programs in other countries. A second example will spotlight the CHANGE Viewer (Climate and Health Analysis for Global Education Viewer, http://climatechangehumanhealth.org/), a visualization tool that uses NASA World Wind to explore climate science through socio-economic datasets. Collaboration among scientists, programmers, and classroom educators created a suite of activities available to teach about Food Security, Water Resources, Rising Sea Level, and other themes.
The Role of Clarity and Blur in Guiding Visual Attention in Photographs
ERIC Educational Resources Information Center
Enns, James T.; MacDonald, Sarah C.
2013-01-01
Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…
EEG based time and frequency dynamics analysis of visually induced motion sickness (VIMS).
Arsalan Naqvi, Syed Ali; Badruddin, Nasreen; Jatoi, Munsif Ali; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin
2015-12-01
3D movies attract viewers because objects appear to fly out of the screen. However, many viewers have reported various problems after watching 3D movies. These problems include visual fatigue, eye strain, headaches, dizziness, and blurred vision, and may collectively be termed visually induced motion sickness (VIMS). This research compares passive 3D technology with conventional 2D technology to determine whether 3D viewing causes these problems. For this purpose, an experiment was designed in which participants were randomly assigned to watch a 2D or a 3D movie. The movie was specially designed to induce VIMS and was shown to every participant for 10 min. The electroencephalogram (EEG) data were recorded throughout the session. At the end of the session, participants rated their feelings using the simulator sickness questionnaire (SSQ). The SSQ data were analyzed, and the ratings of 2D and 3D participants were compared statistically using a two-tailed t-test. From the SSQ results, it was found that participants watching 3D movies reported significantly more symptoms of VIMS (p < 0.05). EEG data were analyzed using MATLAB, and topographic plots were created from the data. A significant difference was observed in frontal theta power, which increased over time in the 2D condition but decreased over time in the 3D condition. Also, a decrease in beta power was found in the temporal lobe of the 3D group. It is therefore concluded that 3D movies have negative effects, causing significant changes in brain activity in terms of band powers, which lead to symptoms of VIMS in viewers.
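The band-power comparisons reported here (frontal theta, temporal beta) rest on standard power-spectral-density estimates. The sketch below only illustrates that kind of computation on a synthetic single-channel trace, assuming conventional band limits (theta 4-8 Hz, beta 13-30 Hz); it is not the authors' MATLAB pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch PSD between f_lo and f_hi (both in Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

fs = 256                                     # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                 # one minute of synthetic data
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)

theta = band_power(eeg, fs, 4, 8)
beta = band_power(eeg, fs, 13, 30)
print(f"theta power: {theta:.3f}, beta power: {beta:.3f}")
```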
NASA Astrophysics Data System (ADS)
Kassin, A.; Cody, R. P.; Barba, M.; Escarzaga, S. M.; Villarreal, S.; Manley, W. F.; Gaylord, A. G.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.
2017-12-01
To better assess progress in Arctic Observing made by U.S. SEARCH, NSF AON, SAON, and related initiatives, an updated version of the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been released. This web mapping application and information system conveys the who, what, where, and when of "data collection sites" - the precise locations of monitoring assets, observing platforms, and wherever repeat marine or terrestrial measurements have been taken. Over 13,000 sites across the circumarctic are documented including a range of boreholes, ship tracks, buoys, towers, sampling stations, sensor networks, vegetation plots, stream gauges, ice cores, observatories, and more. Contributing partners are the U.S. NSF, NOAA, the NSF Arctic Data Center, ADIwg, AOOS, a2dc, CAFF, GINA, IASOA, INTERACT, NASA ABoVE, and USGS, among others. Users can visualize, navigate, select, search, draw, print, view details, and follow links to obtain a comprehensive perspective of environmental monitoring efforts. We continue to develop, populate, and enhance AOV. Recent updates include: a vastly improved Search tool with free text queries, autocomplete, and filters; faster performance; a new clustering visualization; heat maps to highlight concentrated research; and 3-D represented data to more easily identify trends. AOV is founded on principles of interoperability, such that agencies and organizations can use the AOV Viewer and web services for their own purposes. In this way, AOV complements other distributed yet interoperable cyber resources and helps science planners, funding agencies, investigators, data specialists, and others to: assess status, identify overlap, fill gaps, optimize sampling design, refine network performance, clarify directions, access data, coordinate logistics, and collaborate to meet Arctic Observing goals. AOV is a companion application to the Arctic Research Mapping Application (armap.org), which is focused on general project information at a coarser level of granularity.
Variant Review with the Integrative Genomics Viewer.
Robinson, James T; Thorvaldsdóttir, Helga; Wenger, Aaron M; Zehir, Ahmet; Mesirov, Jill P
2017-11-01
Manual review of aligned reads for confirmation and interpretation of variant calls is an important step in many variant calling pipelines for next-generation sequencing (NGS) data. Visual inspection can greatly increase the confidence in calls, reduce the risk of false positives, and help characterize complex events. The Integrative Genomics Viewer (IGV) was one of the first tools to provide NGS data visualization, and it currently provides a rich set of tools for inspection, validation, and interpretation of NGS datasets, as well as other types of genomic data. Here, we present a short overview of IGV's variant review features for both single-nucleotide variants and structural variants, with examples from both cancer and germline datasets. IGV is freely available at https://www.igv.org. Cancer Res; 77(21); e31-34. ©2017 American Association for Cancer Research.
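Reviews of this kind are often batched by scripting IGV's batch-command facility (commands such as new, genome, load, goto, snapshotDirectory and snapshot). The sketch below writes such a batch file for a list of loci; the BAM path, the loci, and the snapshot directory are hypothetical placeholders:

```python
# Write an IGV batch script that loads a BAM file and snapshots each variant locus.
variants = ["chr1:115258747", "chr7:140453136", "chr17:7577120"]   # hypothetical loci
bam_path = "sample.bam"                                            # hypothetical alignment file

lines = [
    "new",
    "genome hg19",
    f"load {bam_path}",
    "snapshotDirectory igv_snapshots",
]
for locus in variants:
    lines.append(f"goto {locus}")
    lines.append(f"snapshot {locus.replace(':', '_')}.png")
lines.append("exit")

with open("review_variants.bat", "w") as fh:
    fh.write("\n".join(lines) + "\n")
```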
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
Dynamic lens and monovision 3D displays to improve viewer comfort.
Johnson, Paul V; Parnell, Jared Aq; Kim, Joohwan; Saunter, Christopher D; Love, Gordon D; Banks, Martin S
2016-05-30
Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect causing mismatches between accommodation and vergence: The eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes. The power of the lenses is yoked to the expected vergence distance thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. This dynamic-lens technique therefore offers an easily implemented technique for reducing the vergence-accommodation conflict and thereby improving viewer experience.
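The dynamic-lens technique rests on simple dioptric arithmetic: focal demand in diopters is the reciprocal of distance in meters, so the variable-focus lens must bridge the difference between the screen's demand and the simulated object's demand. A rough sketch of that arithmetic follows; it is illustrative only and not the authors' control law:

```python
def dioptric_demands(screen_distance_m, simulated_distance_m):
    """Return the focal demand of the screen, of the simulated object, and their
    difference, all in diopters (1 / distance in meters).

    The variable-focus lens has to offset this difference so that accommodation
    can follow the simulated (vergence) distance rather than the fixed screen
    distance. The sign of the correction depends on the lens convention used,
    so only the magnitude of the gap is reported here.
    """
    screen_demand = 1.0 / screen_distance_m
    simulated_demand = 1.0 / simulated_distance_m
    return screen_demand, simulated_demand, abs(simulated_demand - screen_demand)

# Screen at 0.8 m, simulated object at 0.4 m: demands of 1.25 D and 2.5 D,
# so the lens must bridge a 1.25 D gap.
print(dioptric_demands(0.8, 0.4))
```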
Geology’s “Super Graphics” and the Public: Missed Opportunities for Geoscience Education
NASA Astrophysics Data System (ADS)
Clary, R. M.; Wandersee, J. H.
2009-12-01
The geosciences are very visual, as demonstrated by the illustration density of maps, graphs, photographs, and diagrams in introductory textbooks. As geoscience students progress, they are further exposed to advanced graphics, such as phase diagrams and subsurface seismic data visualizations. Photographs provide information from distant sites, while multivariate graphics supply a wealth of data for viewers to access. When used effectively, geology graphics have exceptional educational potential. However, geological graphic data are often presented in specialized formats, and are not easily interpreted by an uninformed viewer. In the Howe-Russell Geoscience Complex at Louisiana State University, there is a very large graphic (~ 30 ft x 6 ft) exhibited in a side hall, immediately off the main entrance hall. The graphic, divided into two obvious parts, displays in its lower section seismic data procured in the Gulf of Mexico, from near offshore Louisiana to the end of the continental shelf. The upper section of the graphic reveals drilling block information along the seismic line. Using Tufte’s model of graphic excellence and Paivio’s dual-coding theory, we analyzed the graphic in terms of data density, complexity, legibility, format, and multivariate presentation. We also observed viewers at the site on 5 occasions, and recorded their interactions with the graphic. This graphic can best be described as a Tufte “super graphic.” Its data are high in density and multivariate in nature. Various data sources are combined in a large format to provide a powerful example of a multitude of information within a convenient and condensed presentation. However, our analysis revealed that the graphic misses an opportunity to educate the non-geologist. The information and seismic “language” of the graphic is specific to the geology community, and the information is not interpreted for the lay viewer. The absence of title, descriptions, and symbol keys are detrimental. Terms are not defined. The absence of color keys and annotations is more likely to lead to an appreciation of graphic beauty, without concomitant scientific understanding. We further concluded that in its current location, constraints of space and reflective lighting prohibit the viewer from simultaneously accessing all subsurface data in a “big picture” view. The viewer is not able to fully comprehend the macro/micro aspects of the graphic design within the limited viewing space. The graphic is an example of geoscience education possibility, a possibility that is currently undermined and unrealized by lack of interpretation. Our analysis subsequently informed the development of a model to maximize the graphic’s educational potential, which can be applied to similar geological super graphics for enhanced public scientific understanding. Our model includes interactive displays that apply the auditory-visual dual coding approach to learning. Notations and aural explanations for geological features should increase viewer understanding, and produce an effective informal educational display.
NASA Astrophysics Data System (ADS)
Tejeda-Sánchez, C.; Muñoz-Nieto, A.; Rodríguez-Gonzálvez, P.
2018-05-01
Visualization and analysis are usually the final steps of a geomatics workflow. This paper shows the workflow followed to set up a hybrid 3D archaeological viewer. Data acquisition for the site survey was done by means of low-cost close-range photogrammetric methods. To serve not only the general public but also technical specialists, a wide range of geomatic products was obtained (2D plans, 3D models, orthophotos, vectorized CAD models, a virtual anastylosis, and cross sections). Finally, all these products were integrated into a three-dimensional archaeological information system. The resulting hybrid archaeological viewer supports a metric, quality-controlled approach to the scientific analysis of the ruins and, thanks to an underlying database and its query capabilities, extends the benefits of an ordinary topographic survey.
Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition
NASA Astrophysics Data System (ADS)
Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro
This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized-glasses-based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show that dynamic gestures are effectively recognized, so that a more natural interaction and a more immersive navigation of the virtual world are achieved.
JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.
Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard
2005-03-09
Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
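The arbitrary re-sectioning offered by the viewer amounts to sampling a 3D image volume along an oblique plane. The following sketch shows the idea with NumPy/SciPy interpolation on a toy volume; it is a conceptual illustration, not the Woolz/Java3D implementation used by JAtlasView:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, z0, tilt_deg, size):
    """Sample a plane through `volume` that is tilted about the y-axis.

    volume   : 3-D array indexed as (z, y, x)
    z0       : z-coordinate where the plane crosses x = 0
    tilt_deg : tilt of the plane around the y-axis, in degrees
    size     : (rows, cols) of the output slice, sampled over (y, x)
    """
    rows, cols = size
    y = np.arange(rows)
    x = np.arange(cols)
    xx, yy = np.meshgrid(x, y)                      # shape (rows, cols)
    zz = z0 + np.tan(np.radians(tilt_deg)) * xx     # plane: z = z0 + x * tan(tilt)
    coords = np.stack([zz.ravel(), yy.ravel(), xx.ravel()])
    sampled = map_coordinates(volume, coords, order=1, mode="nearest")
    return sampled.reshape(rows, cols)

# Toy volume: a gradient cube
vol = np.arange(64 * 64 * 64, dtype=float).reshape(64, 64, 64)
section = oblique_slice(vol, z0=10, tilt_deg=20, size=(64, 64))
print(section.shape)
```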
Commercial Complexity and Local and Global Involvement in Programs: Effects on Viewer Responses.
ERIC Educational Resources Information Center
Oberman, Heiko; Thorson, Esther
A study investigated the effects of local (momentary) and global (whole program) involvement in program context and the effects of message complexity on the retention of television commercials. Sixteen commercials, categorized as simple video/simple audio through complex video/complex audio were edited into two globally high- and two globally…
Visualization of RNA structure models within the Integrative Genomics Viewer.
Busan, Steven; Weeks, Kevin M
2017-07-01
Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
NASA Astrophysics Data System (ADS)
McIntire, John P.; Osesina, O. Isaac; Bartley, Cecilia; Tudoreanu, M. Eduard; Havig, Paul R.; Geiselman, Eric E.
2012-06-01
Ensuring the proper and effective ways to visualize network data is important for many areas of academia, applied sciences, the military, and the public. Fields such as social network analysis, genetics, biochemistry, intelligence, cybersecurity, neural network modeling, transit systems, communications, etc. often deal with large, complex network datasets that can be difficult to interact with, study, and use. There have been surprisingly few human factors performance studies on the relative effectiveness of different graph drawings or network diagram techniques to convey information to a viewer. This is particularly true for weighted networks which include the strength of connections between nodes, not just information about which nodes are linked to other nodes. We describe a human factors study in which participants performed four separate network analysis tasks (finding a direct link between given nodes, finding an interconnected node between given nodes, estimating link strengths, and estimating the most densely interconnected nodes) on two different network visualizations: an adjacency matrix with a heat-map versus a node-link diagram. The results should help shed light on effective methods of visualizing network data for some representative analysis tasks, with the ultimate goal of improving usability and performance for viewers of network data displays.
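The two display conditions compared in this study, a weighted adjacency matrix drawn as a heat map versus a node-link diagram, are easy to mock up for a small graph. The sketch below uses networkx and matplotlib purely to illustrate the two visual forms; it does not reproduce the stimuli used in the experiment:

```python
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt

# Small random weighted network (weights stand in for link strengths)
rng = np.random.default_rng(0)
G = nx.gnm_random_graph(12, 24, seed=0)
for u, v in G.edges:
    G[u][v]["weight"] = float(rng.uniform(0.1, 1.0))

fig, (ax_matrix, ax_node_link) = plt.subplots(1, 2, figsize=(10, 4))

# Condition 1: adjacency matrix with a heat map
A = nx.to_numpy_array(G, weight="weight")
im = ax_matrix.imshow(A, cmap="viridis")
fig.colorbar(im, ax=ax_matrix, label="link strength")
ax_matrix.set_title("Adjacency matrix heat map")

# Condition 2: node-link diagram with edge width encoding strength
pos = nx.spring_layout(G, seed=0)
widths = [3 * G[u][v]["weight"] for u, v in G.edges]
nx.draw(G, pos, ax=ax_node_link, node_size=120, width=widths, with_labels=True)
ax_node_link.set_title("Node-link diagram")

plt.tight_layout()
plt.show()
```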
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
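The canonical correlation analyses reported here relate a matrix of eye-movement measures to a matrix of visual features at fixation. A minimal sketch with scikit-learn on synthetic data (the variable names are placeholders, not the study's actual measures):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n_fixations = 500

# X: visual features at fixation (e.g., edge density, saliency, entropy, hue)
X = rng.normal(size=(n_fixations, 4))
# Y: eye-movement measures (e.g., fixation duration, saccade amplitude)
Y = 0.5 * X[:, :2] + rng.normal(scale=0.5, size=(n_fixations, 2))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Canonical correlations: correlation between paired canonical variates
for i in range(2):
    r = np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.2f}")
```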
LC-IM-TOF Instrument Control & Data Visualization Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
2011-05-12
Liquid Chromatography-Ion Mobility-time of Flight Instrument Control and Data Visualization software is designed to control instrument voltages for the Ion Mobility drift tube. It collects and stores information collected from the Agilent TOF instrument and analyses/displays the ion intensity information acquired. The software interface can be split into 3 categories -- Instrument Settings/Controls, Data Acquisition, and Viewer. The Instrument Settings/Controls prepares the instrument for Data Acquisition. The Viewer contains common objects that are used by Instrument Settings/Controls and Data Acquisition. Intensity information is collected in 1 nanosec bins and separated by TOF pulses called scans. A collection of scans is stored side by side, making up an accumulation. In order for the computer to keep up with the stream of data, 30-50 accumulations are commonly summed into a single frame. A collection of frames makes up an experiment. The Viewer software then takes the experiment and presents the data in several possible ways; each frame can be viewed in TOF bins or m/z (mass to charge ratio). The experiment can be viewed frame by frame, merging several frames, or by viewing the peak chromatogram. The user can zoom into the data, export data, and/or animate frames. Additional features include calibration of the data and even post-processing multiplexed data.
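The data model described here (1 ns bins within a scan, scans summed into accumulations, accumulations summed into frames) maps naturally onto array reshaping and summation. A toy illustration with invented dimensions, not the instrument's actual ones:

```python
import numpy as np

# Hypothetical raw block: 1500 scans x 10000 one-nanosecond TOF bins
n_scans, n_bins = 1500, 10_000
raw = np.random.poisson(0.01, size=(n_scans, n_bins)).astype(np.uint16)

scans_per_accumulation = 50      # scans summed into one accumulation
accums_per_frame = 30            # accumulations summed into one frame (30-50 typical)

# Sum scans into accumulations
n_accums = n_scans // scans_per_accumulation
accumulations = raw[: n_accums * scans_per_accumulation]
accumulations = accumulations.reshape(n_accums, scans_per_accumulation, n_bins).sum(axis=1)

# Sum accumulations into frames
n_frames = n_accums // accums_per_frame
frames = accumulations[: n_frames * accums_per_frame]
frames = frames.reshape(n_frames, accums_per_frame, n_bins).sum(axis=1)

print(accumulations.shape, frames.shape)   # (30, 10000) (1, 10000)
```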
jsNMR: an embedded platform-independent NMR spectrum viewer.
Vosegaard, Thomas
2015-04-01
jsNMR is a lightweight NMR spectrum viewer written in JavaScript/HyperText Markup Language (HTML), which provides a cross-platform spectrum visualizer that runs on all computer architectures including mobile devices. Experimental (and simulated) datasets are easily opened in jsNMR by (i) drag and drop on a jsNMR browser window, (ii) by preparing a jsNMR file from the jsNMR web site, or (iii) by mailing the raw data to the jsNMR web portal. jsNMR embeds the original data in the HTML file, so a jsNMR file is a self-transforming dataset that may be exported to various formats, e.g. comma-separated values. The main applications of jsNMR are to provide easy access to NMR data without the need for dedicated software installed and to provide the possibility to visualize NMR spectra on web sites. Copyright © 2015 John Wiley & Sons, Ltd.
Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data.
Jain, Eakta; Sheikh, Yaser; Hodgins, Jessica
2016-01-01
Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.
Gesture Interaction Browser-Based 3D Molecular Viewer.
Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela
2016-01-01
The paper presents an open-source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, such as medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). Interaction with the 3D models is performed with a Leap Motion controller, which allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics-related problems in both biomedical research and education.
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.
Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation
ERIC Educational Resources Information Center
Potter, Mary C.; Fox, Laura F.
2009-01-01
Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…
Visions of our Planet's Atmosphere, Land & Oceans
NASA Technical Reports Server (NTRS)
Hasler, Arthur F.
2002-01-01
The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to South Africa, Cape Town and Johannesburg using NASA Terra MODIS data, Landsat data and 1m IKONOS "Spy Satellite" data. Zoom in to any place in South Africa using Earth Viewer 3D from Keyhole Inc. and Landsat data at 30 m resolution. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes & "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7 including 1-min GOES rapid scan image sequences of Nov 9th 2001 Midwest tornadic thunderstorms and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we present science to the public. See dust storms and flooding in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, Popular Science & on National & International Network TV. New computer software tools allow us to roam & zoom through massive global images e.g. Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the north and south polar ice packs, with icebergs on the coasts of Greenland and off the coast of Antarctica. Spectacular new visualizations of the global land, atmosphere & oceans are shown. Listen to the pulse of our planet. See how land vegetation, ocean plankton, clouds and temperatures respond to the sun & seasons. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. See the city lights, fishing fleets, gas flares and biomass burning of the Earth at night observed by the "night-vision" DMSP military satellite. The presentation will be made using the latest HDTV and video projection technology, now driven from a laptop computer through an entirely digital path.
WebChem Viewer: a tool for the easy dissemination of chemical and structural data sets
2014-01-01
Background Sharing sets of chemical data (e.g., chemical properties, docking scores, etc.) among collaborators with diverse skill sets is a common task in computer-aided drug design and medicinal chemistry. The ability to associate this data with images of the relevant molecular structures greatly facilitates scientific communication. There is a need for a simple, free, open-source program that can automatically export aggregated reports of entire chemical data sets to files viewable on any computer, regardless of the operating system and without requiring the installation of additional software. Results We here present a program called WebChem Viewer that automatically generates these types of highly portable reports. Furthermore, in designing WebChem Viewer we have also created a useful online web application for remotely generating molecular structures from SMILES strings. We encourage the direct use of this online application as well as its incorporation into other software packages. Conclusions With these features, WebChem Viewer enables interdisciplinary collaborations that require the sharing and visualization of small molecule structures and associated sets of heterogeneous chemical data. The program is released under the FreeBSD license and can be downloaded from http://nbcr.ucsd.edu/WebChemViewer. The associated web application (called “Smiley2png 1.0”) can be accessed through freely available web services provided by the National Biomedical Computation Resource at http://nbcr.ucsd.edu. PMID:24886360
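A local equivalent of the structure-from-SMILES rendering provided by the companion web service can be sketched with RDKit; note that RDKit is our stand-in here and is not stated to be the library behind Smiley2png:

```python
# Requires: pip install rdkit
from rdkit import Chem
from rdkit.Chem import Draw

smiles = "CC(=O)Oc1ccccc1C(=O)O"          # aspirin, used as a stand-in molecule
mol = Chem.MolFromSmiles(smiles)
if mol is None:
    raise ValueError(f"could not parse SMILES: {smiles}")

# Render the 2D depiction to a PNG file
Draw.MolToFile(mol, "aspirin.png", size=(300, 300))
```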
Richards, Michael R; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Walther, Dirk B; Rosenstiel, Stephen; Sacksteder, James M
2015-04-01
There is disagreement in the literature concerning the importance of the mouth in overall facial attractiveness. Eye tracking provides an objective method to evaluate what people see. The objective of this study was to determine whether dental and facial attractiveness alters viewers' visual attention in terms of which area of the face (eyes, nose, mouth, chin, ears, or other) is viewed first, viewed the greatest number of times, and viewed for the greatest total time (duration) using eye tracking. Seventy-six viewers underwent 1 eye tracking session. Of these, 53 were white (49% female, 51% male). Their ages ranged from 18 to 29 years, with a mean of 19.8 years, and none were dental professionals. After being positioned and calibrated, they were shown 24 unique female composite images, each image shown twice for reliability. These images reflected a repaired unilateral cleft lip or 3 grades of dental attractiveness similar to those of grades 1 (near ideal), 7 (borderline treatment need), and 10 (definite treatment need) as assessed in the aesthetic component of the Index of Orthodontic Treatment Need (AC-IOTN). The images were then embedded in faces of 3 levels of attractiveness: attractive, average, and unattractive. During viewing, data were collected for the first location, frequency, and duration of each viewer's gaze. Observer reliability ranged from 0.58 to 0.92 (intraclass correlation coefficients) but was less than 0.07 (interrater) for the chin, which was eliminated from the study. Likewise, reliability for the area of first fixation was kappa less than 0.10 for both intrarater and interrater reliabilities; the area of first fixation was also removed from the data analysis. Repeated-measures analysis of variance showed a significant effect (P <0.001) for level of attractiveness by malocclusion by area of the face. For both number of fixations and duration of fixations, the eyes overwhelmingly were most salient, with the mouth receiving the second most visual attention. At times, the mouth and the eyes were statistically indistinguishable in viewers' fixation counts and durations. As dental attractiveness decreased, visual attention to the mouth increased, approaching that of the eyes. AC-IOTN grade 10 gained the most attention, followed by both AC-IOTN grade 7 and the cleft. AC-IOTN grade 1 received the least amount of visual attention. Also, lower dental attractiveness (AC-IOTN 7 and AC-IOTN 10) received more visual attention as facial attractiveness increased. Eye tracking indicates that dental attractiveness can alter the level of visual attention depending on the female models' facial attractiveness when viewed by laypersons. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
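The two fixation measures analyzed in this study (number of fixations and total fixation duration per facial area) reduce to a group-by over the eye-tracking record. A pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical eye-tracking export: one row per fixation
fixations = pd.DataFrame({
    "viewer":      ["v01", "v01", "v01", "v02", "v02", "v02"],
    "image":       ["img1"] * 6,
    "aoi":         ["eyes", "mouth", "eyes", "eyes", "nose", "mouth"],
    "duration_ms": [310, 220, 180, 400, 150, 260],
})

# Count and total duration of fixations per viewer, image and area of interest
summary = (fixations
           .groupby(["viewer", "image", "aoi"])
           .agg(n_fixations=("duration_ms", "size"),
                total_duration_ms=("duration_ms", "sum"))
           .reset_index())
print(summary)
```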
Reimagining the microscope in the 21st century using the scalable adaptive graphics environment
Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce
2015-01-01
Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason for this current situation is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created in the patient care and medical education scenarios. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092
Cultivation Effects: Television and Foreign Countries.
ERIC Educational Resources Information Center
Winterhoff-Spurk, Peter
This test of Marshall McLuhan's claim that increased exposure to television will develop a perception of the world as a "global village" used estimation of cognitive distance as an operational definition of the global village concept. The first phase of the study tested the hypothesis that "heavy" television viewers' estimates…
Ripoche, Hugues; Laine, Elodie; Ceres, Nicoletta; Carbone, Alessandra
2017-01-04
The database JET2 Viewer, openly accessible at http://www.jet2viewer.upmc.fr/, reports putative protein binding sites for all three-dimensional (3D) structures available in the Protein Data Bank (PDB). This knowledge base was generated by applying the computational method JET2 at large scale to more than 20 000 chains. The JET2 strategy yields very precise predictions of interacting surfaces and unravels their evolutionary process and complexity. JET2 Viewer provides an online intelligent display, including interactive 3D visualization of the binding sites mapped onto PDB structures and suitable files recording JET2 analyses. Predictions were evaluated on more than 15 000 experimentally characterized protein interfaces. This is, to our knowledge, the largest evaluation of a protein binding site prediction method. The overall performance of JET2 on all interfaces is: Sen = 52.52, PPV = 51.24, Spe = 80.05, Acc = 75.89. The data can be used to foster new strategies for the modulation of protein-protein interactions and interaction surface redesign. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
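The reported metrics (Sen, PPV, Spe, Acc) follow the usual confusion-matrix definitions. For reference, a small sketch computing them from per-residue counts; the counts below are invented for illustration and are not those behind the published figures:

```python
def interface_metrics(tp, fp, tn, fn):
    """Sensitivity, positive predictive value, specificity and accuracy (in %)."""
    sen = 100.0 * tp / (tp + fn)
    ppv = 100.0 * tp / (tp + fp)
    spe = 100.0 * tn / (tn + fp)
    acc = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return sen, ppv, spe, acc

# Invented residue counts, for illustration only
sen, ppv, spe, acc = interface_metrics(tp=30, fp=10, tn=50, fn=10)
print(f"Sen={sen:.2f} PPV={ppv:.2f} Spe={spe:.2f} Acc={acc:.2f}")
```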
Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.
Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C
2004-11-01
Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationship among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are then used for the object-rotation movie. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6, obtainable from our laboratory on request.
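The transform-shift-overlay idea (stack the aligned 2D sections into a volume, rotate the stack to a viewing angle, and record its projection onto the image plane) can be illustrated with a few NumPy/SciPy calls. This is a conceptual sketch, not the Visual Basic implementation of 3D_Viewer:

```python
import numpy as np
from scipy.ndimage import rotate

def rotated_projection(sections, angle_deg):
    """Stack aligned 2-D sections, rotate the stack about the vertical image axis,
    and return a maximum-intensity projection onto the viewing plane.

    sections  : list of 2-D arrays of identical shape (already aligned)
    angle_deg : viewing angle in degrees
    """
    volume = np.stack(sections, axis=0)                          # (z, y, x)
    turned = rotate(volume, angle_deg, axes=(0, 2), reshape=True, order=1)
    return turned.max(axis=0)                                    # project along z

# Toy stack: 20 sections containing a bright off-centre square
sections = []
for z in range(20):
    img = np.zeros((64, 64))
    img[20:30, 30 + z // 2 : 40 + z // 2] = 1.0
    sections.append(img)

for angle in (0, 30, 60):
    print(angle, rotated_projection(sections, angle).shape)
```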
BMDExpress Data Viewer: A Visualization Tool to Analyze BMDExpress Datasets
Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at whic...
Decoding facial blends of emotion: visual field, attentional and hemispheric biases.
Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I
2013-12-01
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.
BMDExpress Data Viewer: A Visualization Tool to Analyze ...
Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at which biological perturbations occur. However, graphing and analytical capabilities within BMDExpress are limited, and the analysis of output files is challenging. We developed a web-based application, BMDExpress Data Viewer, for visualization and graphical analyses of BMDExpress output files. The software application consists of two main components: ‘Summary Visualization Tools’ and ‘Dataset Exploratory Tools’. We demonstrate through two case studies that the ‘Summary Visualization Tools’ can be used to examine and assess the distributions of probe and pathway BMD outputs, as well as derive a potential regulatory BMD through the modes or means of the distributions. The ‘Functional Enrichment Analysis’ tool presents biological processes in a two-dimensional bubble chart view. By applying filters of pathway enrichment p-value and minimum number of significant genes, we showed that the Functional Enrichment Analysis tool can be applied to select pathways that are potentially sensitive to chemical perturbations. The ‘Multiple Dataset Comparison’ tool enables comparison of BMDs across multiple experiments (e.g., across time points, tissues, or organisms, etc.). The ‘BMDL-BM
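The filtering step described for the Functional Enrichment Analysis tool (keep pathways whose enrichment p-value falls below a threshold and that contain at least a minimum number of significant genes) is a plain tabular filter. A pandas sketch over a hypothetical BMDExpress-style export:

```python
import pandas as pd

# Hypothetical pathway-level BMDExpress output
pathways = pd.DataFrame({
    "pathway":             ["oxidative stress", "DNA repair", "lipid metabolism", "apoptosis"],
    "enrichment_p":        [0.001, 0.20, 0.03, 0.04],
    "n_significant_genes": [12, 3, 5, 2],
    "median_bmd_mg_kg":    [4.2, 18.0, 7.5, 9.1],
})

p_cutoff = 0.05          # pathway enrichment p-value threshold (assumed)
min_genes = 4            # minimum number of significant genes (assumed)

selected = pathways[(pathways["enrichment_p"] < p_cutoff) &
                    (pathways["n_significant_genes"] >= min_genes)]
print(selected.sort_values("median_bmd_mg_kg"))
```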
DeShazer, Mary K
2014-12-01
Photographic representations of women living with or beyond breast cancer have gained prominence in recent decades. Postmillennial visual narratives are both documentary projects and dialogic sites of self-construction and reader-viewer witness. After a brief overview of 30 years of breast cancer photography, this essay analyzes a collaborative photo-documentary by Stephanie Byram and Charlee Brodsky, Knowing Stephanie (2003), and a memorial photographic essay by Brodsky written ten years after Byram's death, "Remembering Stephanie" (2014). The ethics of representing women's postsurgical bodies and opportunities for reader-viewers to engage in "productive looking" (Kaja Silverman's concept) are the focal issues under consideration.
Manananggal - a novel viewer for alternative splicing events.
Barann, Matthias; Zimmer, Ralf; Birzele, Fabian
2017-02-21
Alternative splicing is an important cellular mechanism that can be analyzed by RNA sequencing. However, identification of splicing events in an automated fashion is error-prone. Thus, further validation is required to select reliable instances of alternative splicing events (ASEs). There are only a few tools specifically designed for interactive inspection of ASEs, and available visualization approaches can be significantly improved. Here, we present Manananggal, an application specifically designed for the identification of splicing events in next generation sequencing data. Manananggal includes a web application for visual inspection and a command line tool that allows for ASE detection. We compare the sashimi plots available in the IGV Viewer, the DEXSeq splicing plots and SpliceSeq to the Manananggal interface and discuss the advantages and drawbacks of these tools. We show that sashimi plots (such as those used by the IGV Viewer and SpliceSeq) offer a practical solution for simple ASEs, but also indicate shortcomings for highly complex genes. Manananggal is an interactive web application that offers functions specifically tailored to the identification of alternative splicing events that other tools lack. The ability to select a subset of isoforms allows an easier interpretation of complex alternative splicing events. In contrast to SpliceSeq and the DEXSeq splicing plot, Manananggal does not obscure the gene structure by showing full transcript models, which makes it easier to determine which isoforms are expressed and which are not.
van Doorn, Andrea J.; Wagemans, Johan
2016-01-01
Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329
User experience while viewing stereoscopic 3D television
Read, Jenny C.A.; Bohr, Iwo
2014-01-01
3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the ‘nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550
Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer
Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki
2007-01-01
A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic content) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics, are visualized, allowing interactive browsing of various fields of research characterized by keywords, topics, or research teams. A typical use of the 3D-SE Viewer is to browse topics displayed on one sphere; selecting one or several items then displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources. PMID:18974802
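The core idea above, placing the two vertex sets of a word-document bipartite graph on spheres in 3-D, can be illustrated with a much simpler stand-in than the published SE algorithm: take a rank-3 factorization of the occurrence matrix and normalize each vertex onto a sphere. The sketch below is that stand-in, not the 3D-SE Viewer's own layout method.

```python
# Illustrative stand-in only (not the published spherical embedding algorithm):
# lay out words and documents of a bipartite occurrence matrix on two
# concentric spheres using a rank-3 SVD followed by radial normalization.
import numpy as np

def sphere_layout(occurrence, r_words=1.0, r_docs=2.0):
    # occurrence: (n_words, n_docs) matrix of word-in-document counts
    U, s, Vt = np.linalg.svd(occurrence, full_matrices=False)
    words = U[:, :3] * s[:3]        # 3-D coordinates for words
    docs = Vt[:3, :].T * s[:3]      # 3-D coordinates for documents
    project = lambda X, r: r * X / np.linalg.norm(X, axis=1, keepdims=True)
    return project(words, r_words), project(docs, r_docs)

# toy 4-word x 3-document matrix
W, D = sphere_layout(np.array([[2, 0, 1],
                               [0, 3, 0],
                               [1, 1, 1],
                               [0, 0, 2]], dtype=float))
```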
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.
Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H
2017-01-01
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
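The contrast the authors draw, plotting every ensemble member versus plotting a statistical envelope of the members, is easy to reproduce on synthetic data. The sketch below is purely illustrative; the "summary" shown is a generic percentile band, not any operational forecast cone.

```python
# Sketch of the two display types discussed above on synthetic track-like data:
# an ensemble display plots every member; a summary display plots a percentile
# envelope of the members. Illustrative only, not a forecast product.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
members = np.array([np.cumsum(rng.normal(0, 0.3, t.size)) for _ in range(30)])

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
ax1.plot(t, members.T, color="steelblue", alpha=0.3)   # ensemble display
ax1.set_title("Ensemble display")
lo, hi = np.percentile(members, [10, 90], axis=0)
ax2.fill_between(t, lo, hi, color="lightgray")          # summary display
ax2.plot(t, members.mean(axis=0), color="black")
ax2.set_title("Summary display (10th-90th percentile)")
plt.show()
```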
Schouten, Ben; Troje, Nikolaus F.; Vroomen, Jean; Verfaillie, Karl
2011-01-01
Background The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. Methodology/Principal Findings In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. Conclusions/Significance The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth perception of plws. PMID:21373181
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.
SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data
NASA Astrophysics Data System (ADS)
Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.
2016-12-01
Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an xml file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later. More information, including downloads, tutorials, and example movies created fully within SCEC-VDO is available here: http://scecvdo.usc.edu
Pardo, Carolina E; Carr, Ian M; Hoffman, Christopher J; Darst, Russell P; Markham, Alexander F; Bonthron, David T; Kladde, Michael P
2011-01-01
Bisulfite sequencing is a widely-used technique for examining cytosine DNA methylation at nucleotide resolution along single DNA strands. Probing with cytosine DNA methyltransferases followed by bisulfite sequencing (MAPit) is an effective technique for mapping protein-DNA interactions. Here, MAPit methylation footprinting with M.CviPI, a GC methyltransferase we previously cloned and characterized, was used to probe hMLH1 chromatin in HCT116 and RKO colorectal cancer cells. Because M.CviPI-probed samples contain both CG and GC methylation, we developed a versatile, visually-intuitive program, called MethylViewer, for evaluating the bisulfite sequencing results. Uniquely, MethylViewer can simultaneously query cytosine methylation status in bisulfite-converted sequences at as many as four different user-defined motifs, e.g. CG, GC, etc., including motifs with degenerate bases. Data can also be exported for statistical analysis and as publication-quality images. Analysis of hMLH1 MAPit data with MethylViewer showed that endogenous CG methylation and accessible GC sites were both mapped on single molecules at high resolution. Disruption of positioned nucleosomes on single molecules of the PHO5 promoter was detected in budding yeast using M.CviPII, increasing the number of enzymes available for probing protein-DNA interactions. MethylViewer provides an integrated solution for primer design and rapid, accurate and detailed analysis of bisulfite sequencing or MAPit datasets from virtually any biological or biochemical system.
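The motif-scoring idea described above, calling cytosine methylation at user-defined and possibly degenerate motifs in bisulfite-converted reads, can be sketched independently of MethylViewer. The sketch assumes a gap-free alignment between the read and its genomic reference and treats a retained C as methylated (protected) and a C-to-T change as converted (unmethylated); it is not MethylViewer's code.

```python
# Sketch (not MethylViewer code): call methylation at the cytosine of a
# user-defined motif (IUPAC degeneracy allowed) by comparing an aligned
# bisulfite-converted read against its genomic reference. Assumes both
# sequences are the same length and already aligned without gaps.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "N": "ACGT"}

def matches(motif, ref, i):
    window = ref[i:i + len(motif)]
    return len(window) == len(motif) and all(b in IUPAC[m] for m, b in zip(motif, window))

def call_motif(ref, read, motif, c_offset):
    """Return (position, True/False) per motif hit; True = C retained (methylated)."""
    calls = []
    for i in range(len(ref)):
        if matches(motif, ref, i) and ref[i + c_offset] == "C":
            calls.append((i + c_offset, read[i + c_offset] == "C"))  # T means converted
    return calls

# Example: GC sites (c_offset=1) and CG sites (c_offset=0) on a toy alignment
ref  = "TTGCAACGTTGC"
read = "TTGCAATGTTGT"   # bisulfite-converted read
print(call_motif(ref, read, "GC", 1), call_motif(ref, read, "CG", 0))
```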
exVis: a visual analysis tool for wind tunnel data
NASA Astrophysics Data System (ADS)
Deardorff, D. G.; Keeley, Leslie E.; Uselton, Samuel P.
1998-05-01
exVis is a software tool created to support interactive display and analysis of data collected during wind tunnel experiments. It is a result of a continuing project to explore the uses of information technology in improving the effectiveness of aeronautical design professionals. The data analysis goals are accomplished by allowing aerodynamicists to display and query data collected by new data acquisition systems and to create traditional wind tunnel plots from this data by interactively interrogating these images. exVis was built as a collection of distinct modules to allow for rapid prototyping, to foster evolution of capabilities, and to facilitate object reuse within other applications being developed. It was implemented using C++ and Open Inventor, commercially available object-oriented tools. The initial version was composed of three main classes. Two of these modules are autonomous viewer objects intended to display the test images (ImageViewer) and the plots (GraphViewer). The third main class is the Application User Interface (AUI) which manages the passing of data and events between the viewers, as well as providing a user interface to certain features. User feedback was obtained on a regular basis, which allowed for quick revision cycles and appropriately enhanced feature sets. During the development process additional classes were added, including a color map editor and a data set manager. The ImageViewer module was substantially rewritten to add features and to use the data set manager. The use of an object-oriented design was successful in allowing rapid prototyping and easy feature addition.
Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has been preventing large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea level rise, to investigate reconstructed paleoceanographic water masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.
Viewer Definitions of Violence.
ERIC Educational Resources Information Center
Robinson, Deanna Campbell; And Others
Segments of primetime and Saturday morning television programing were viewed by 225 people who then reported what criteria they used to assess violence on commercial and public television. The subjects also provided data on their visual media experience, their viewing habits, their viewing attitudes, and demographic characteristics. The subjects…
VisiOmatic: Celestial image viewer
NASA Astrophysics Data System (ADS)
Bertin, Emmanuel; Marmo, Chiara; Pillay, Ruven
2014-08-01
VisiOmatic is a web client for IIPImage (ascl:1408.009) and is used to visualize and navigate through large science images from remote locations. It requires STIFF (ascl:1110.006), is based on the Leaflet Javascript library, and works on both touch-based and mouse-based devices.
Genre Matters: A Comparative Study on the Entertainment Effects of 3D in Cinematic Contexts
NASA Astrophysics Data System (ADS)
Ji, Qihao; Lee, Young Sun
2014-09-01
Built upon prior comparative studies of 3D and 2D films, the current project investigates the effects of 2D and 3D on viewers' perception of enjoyment, narrative engagement, presence, involvement, and flow across three movie genres (Action/fantasy vs. Drama vs. Documentary). Through a 2 by 3 mixed factorial design, participants (n = 102) were separated into two viewing conditions (2D and 3D) and watched three 15-min film segments. The results suggest that both visual production methods are equally effective at eliciting enjoyment, narrative engagement, involvement, flow, and presence; no main effect of visual production method was found. In addition, by examining genre effects in both the 3D and 2D conditions, we found that 3D works better for action movies than for documentaries in eliciting viewers' perception of enjoyment and presence; similarly, it substantially improves viewers' narrative engagement for documentaries compared with dramas. Implications and limitations are discussed in detail.
The ADS All Sky Survey: footprints of astronomy literature, in the sky
NASA Astrophysics Data System (ADS)
Pepe, Alberto; Goodman, A. A.; Muench, A. A.; Seamless Astronomy Group at the CfA
2014-01-01
The ADS All-Sky Survey (ADSASS) aims to transform the NASA Astrophysics Data System (ADS), widely known for its unrivaled value as a literature resource for astronomers, into a data resource. The ADS is not a data repository per se, but it implicitly contains valuable holdings of astronomical data, in the form of images, tables and object references contained within articles. The objective of the ADSASS effort is to extract these data and make them discoverable and available through existing data viewers. In this talk, the ADSASS viewer - http://adsass.org/ - will be presented: a sky heatmap of astronomy articles based on the celestial objects they reference. The ADSASS viewer is an innovative research and visual search tool, as it allows users to explore astronomical literature based on celestial location rather than keyword strings. The ADSASS is a NASA-funded initiative carried out by the Seamless Astronomy Group at the Harvard-Smithsonian Center for Astrophysics.
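The "sky heatmap" concept described above amounts to binning article-object positions by celestial coordinates. The sketch below does exactly that with a plain 2-D histogram over RA/Dec; the coordinates are random placeholders, not ADS records, and no map projection or HEALPix scheme is used.

```python
# Sketch of an article-density sky heatmap of the kind described above:
# bin (RA, Dec) positions referenced by articles into a 2-D histogram.
# The coordinates here are random placeholders, not ADS data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
ra = rng.uniform(0, 360, 5000)                          # degrees
dec = np.degrees(np.arcsin(rng.uniform(-1, 1, 5000)))   # uniform on the sphere

counts, ra_edges, dec_edges = np.histogram2d(ra, dec, bins=[72, 36],
                                             range=[[0, 360], [-90, 90]])
plt.pcolormesh(ra_edges, dec_edges, counts.T, cmap="magma")
plt.xlabel("RA (deg)"); plt.ylabel("Dec (deg)")
plt.colorbar(label="articles per bin")
plt.show()
```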
The Czech Hydrometeorological Institute's severe storm nowcasting system
NASA Astrophysics Data System (ADS)
Novak, Petr
2007-02-01
To satisfy requirements for operational severe weather monitoring and prediction, the Czech Hydrometeorological Institute (CHMI) has developed a severe storm nowcasting system which uses weather radar data as its primary data source. Previous CHMI studies identified two methods of radar echo prediction, which were implemented during 2003 in the Czech weather radar network's operational weather processor. The applications put into operation were the Continuity Tracking Radar Echoes by Correlation (COTREC) algorithm, and an application that predicts future radar fields using the wind field derived from the geopotential at 700 hPa calculated from a local numerical weather prediction model (ALADIN). To ensure timely delivery of the prediction products to users, the forecasts are integrated into a web-based viewer (JSMeteoView) developed by the CHMI Radar Department. At present, this viewer is used by all CHMI forecast offices for versatile visualization of radar and other meteorological data (Meteosat, lightning detection, NWP LAM output, SYNOP data) in the Internet/Intranet environment, and it offers detailed geographical navigation capabilities.
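COTREC-style nowcasting derives a motion field by correlating successive radar images and then advects the most recent echoes along it. The sketch below is a strong simplification: it estimates a single global displacement by FFT phase correlation and shifts the latest field accordingly, whereas COTREC works on many sub-areas and smooths the resulting vector field.

```python
# Highly simplified sketch of correlation-based echo extrapolation: estimate one
# global displacement between two radar fields by phase correlation, then shift
# the latest field by that displacement to produce a nowcast.
import numpy as np

def phase_correlation_shift(prev, curr):
    # cross-power spectrum; the peak of its inverse FFT sits at the displacement
    F = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > prev.shape[0] // 2: dy -= prev.shape[0]   # wrap to signed shifts
    if dx > prev.shape[1] // 2: dx -= prev.shape[1]
    return dy, dx

def nowcast(prev, curr, steps=1):
    dy, dx = phase_correlation_shift(prev, curr)
    return np.roll(np.roll(curr, dy * steps, axis=0), dx * steps, axis=1)

# toy example: a blob moving one pixel per frame to the right
field = np.zeros((32, 32)); field[10:14, 5:9] = 1.0
prev, curr = field, np.roll(field, 1, axis=1)
forecast = nowcast(prev, curr, steps=2)   # blob now near columns 8-11
```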
A Java-based tool for creating KML files from GPS waypoints
NASA Astrophysics Data System (ADS)
Kinnicutt, P. G.; Rivard, C.; Rimer, S.
2008-12-01
Google Earth provides a free tool with powerful capabilities for visualizing geoscience images and data. Commercial software tools exist for doing sophisticated digitizing and spatial modeling, but for the purposes of presentation, visualization and overlaying aerial images with data, Google Earth provides much of the functionality. Likewise, with current technologies in GPS (Global Positioning System) systems and with Google Earth Plus, it is possible to upload GPS waypoints, tracks and routes directly into Google Earth for visualization. However, older technology GPS units and even low-cost GPS units found today may lack the necessary communications interface to a computer (e.g. no Bluetooth, no WiFi, no USB, no Serial, etc.) or may have an incompatible interface, such as a Serial port but no USB adapter available. In such cases, any waypoints, tracks and routes saved in the GPS unit or recorded in a field notebook must be manually transferred to a computer for use in a GIS system or other program. This presentation describes a Java-based tool developed by the author which enables users to enter GPS coordinates in a user-friendly manner, then save these coordinates in a Keyhole Markup Language (KML) file format, for visualization in Google Earth. This tool either accepts user-interactive input or accepts input from a CSV (Comma Separated Value) file, which can be generated from any spreadsheet program. This tool accepts input in the form of lat/long or UTM (Universal Transverse Mercator) coordinates. This presentation describes this system's applicability through several small case studies. This free and lightweight tool simplifies the task of manually inputting GPS data into Google Earth for people working in the field without an automated mechanism for uploading the data; for instance, the user may not have internet connectivity or may not have the proper hardware or software. Since it is a Java application and not a web-based tool, it can be installed on one's field laptop and the GPS data can be manually entered without the need for internet connectivity. This tool provides a table view of the GPS data, but lacks a KML viewer to view the data overlain on top of an aerial view, as this viewer functionality is provided in Google Earth. The tool's primary contribution lies in its more convenient method for entering the GPS data manually when automated technologies are not available.
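Converting a waypoint CSV into KML, the core of the tool described above, is mostly string templating around the lon,lat,alt coordinate order that KML expects. The sketch below is not the author's tool; the column names name, lat, lon are assumptions, and UTM input would need an extra conversion step.

```python
# Minimal sketch (not the tool described above): convert a CSV of waypoints with
# assumed columns "name,lat,lon" into a KML file that Google Earth can open.
import csv
from xml.sax.saxutils import escape

KML_DOC = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n{placemarks}'
           '</Document></kml>\n')
PLACEMARK = ('  <Placemark><name>{name}</name>'
             '<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>\n')

def csv_to_kml(csv_path, kml_path):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))   # expects header: name,lat,lon
    marks = "".join(PLACEMARK.format(name=escape(r["name"]),
                                     lat=float(r["lat"]), lon=float(r["lon"]))
                    for r in rows)
    with open(kml_path, "w", encoding="utf-8") as f:
        f.write(KML_DOC.format(placemarks=marks))

# csv_to_kml("waypoints.csv", "waypoints.kml")
```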
Teaching for Visual Literacy: 50 Great Young Adult Films.
ERIC Educational Resources Information Center
Teasley, Alan B.; Wilder, Ann
1994-01-01
Discusses how films portraying the lives of young adults can serve as the basis for a "viewer response" study of film and filmmaking. Lists and summarizes 50 films found to be suitable for teaching to young adults. Provides criteria by which the films were selected. (HB)
PC-Based Virtual Reality for CAD Model Viewing
ERIC Educational Resources Information Center
Seth, Abhishek; Smith, Shana S.-F.
2004-01-01
Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…
Viewers' perceptions of a YouTube music therapy session video.
Gregory, Dianne; Gooding, Lori G
2013-01-01
Recent research revealed diverse content and varying levels of quality in YouTube music therapy videos and prompted questions about viewers' discrimination abilities. This study compares ratings of a YouTube music therapy session video by viewers with different levels of music therapy expertise to determine video elements related to perceptions of representational quality. Eighty-one participants included 25 novices (freshmen and sophomores in an introductory music therapy course), 25 pre-interns (seniors and equivalency students who had completed all core Music Therapy courses), 26 professionals (MT-BC or MT-BC eligibility) with a mean of 1.75 years of experience, and an expert panel of 5 MT-BC professionals with a mean of 11 years of experience in special education. After viewing a music therapy special education video that in previous research met basic competency criteria and professional standards of the American Music Therapy Association, participants completed a 16-item questionnaire. Novices' ratings were more positive (less discriminating) compared to experienced viewers' neutral or negative ratings. Statistical analysis (ANOVA) of novice, pre-intern, and professional ratings of all items revealed significant differences (p < .05) for specific therapy content and for a global rating of representational quality. Experienced viewers' ratings were similar to the expert panel's ratings. Content analysis of viewers' reasons for their representational quality ratings corroborated ratings of therapy-specific content. A video that combines and clearly depicts therapy objectives, client improvement, and the effectiveness of music within a therapeutic intervention best represents the music therapy profession in a public social platform like YouTube.
Darde, Thomas A.; Sallou, Olivier; Becker, Emmanuelle; Evrard, Bertrand; Monjeaud, Cyril; Le Bras, Yvan; Jégou, Bernard; Collin, Olivier; Rolland, Antoine D.; Chalmel, Frédéric
2015-01-01
We report the development of the ReproGenomics Viewer (RGV), a multi- and cross-species working environment for the visualization, mining and comparison of published omics data sets for the reproductive science community. The system currently embeds 15 published data sets related to gametogenesis from nine model organisms. Data sets have been curated and conveniently organized into broad categories including biological topics, technologies, species and publications. RGV's modular design for both organisms and genomic tools enables users to upload and compare their data with that from the data sets embedded in the system in a cross-species manner. The RGV is freely available at http://rgv.genouest.org. PMID:25883147
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang
2007-12-01
In this paper, we introduce a graphics-to-Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission to overcome the shortcomings of real-time representation and interaction in 3D graphics applications running on mobile devices. We therefore develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system is composed of a client viewer and a graphics-to-SVG adaptation server. The client viewer allows the user to access the same 3D content from different devices according to consumer interactions.
Proper poster presentation: a visual and verbal ABC.
Wright, V; Moll, J M
1987-08-01
The 58 posters exhibited at the 1985 Annual General Meeting of the British Society for Rheumatology have been analysed for 13 variables considered important in the construction of a good poster. In particular the attributes of information, simplicity and visual attractiveness were studied. The time spent by viewers was also measured for one selected poster each in immunology, biochemistry, therapeutics and clinical medicine. On the basis of this survey, nine recommendations for proper presentation were made.
The Cloud-Based Integrated Data Viewer (IDV)
NASA Astrophysics Data System (ADS)
Fisher, Ward
2015-04-01
Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While there is a suite of tools and methodologies used in traditional software engineering environments to mitigate this issue, they are typically ignored by developers lacking a background in software engineering. The result is a large body of software which is simultaneously critical and difficult to maintain. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. The advent of cloud computing has provided a solution to this problem that was not previously practical on a large scale: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. Through application streaming we are able to bring the same visualization to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and will include a brief discussion of the underlying technologies involved. We will also discuss the differences between local software and software-as-a-service.
Challenging Popular Media's Control by Teaching Critical Viewing.
ERIC Educational Resources Information Center
Couch, Richard A.
The purpose of this paper is to express the importance of visual/media literacy and the teaching of critical television viewing. An awareness of the properties and characteristics of television--including camera angles and placement, editing, and emotionally involving subject matter--aids viewers in the critical viewing process. The knowledge of…
77 FR 16688 - Review of the Emergency Alert System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... the approximately three and one half-year window it is providing for intermediary device users is..., including the originator, event, location and the valid time period of the EAS message, from the CAP text... event, which it believes would provide more visual information to alert message viewers. The Commission...
Grasp Preparation Improves Change Detection for Congruent Objects
ERIC Educational Resources Information Center
Symes, Ed; Tucker, Mike; Ellis, Rob; Vainio, Lari; Ottoboni, Giovanni
2008-01-01
A series of experiments provided converging support for the hypothesis that action preparation biases selective attention to action-congruent object features. When visual transients are masked in so-called "change-blindness scenes," viewers are blind to substantial changes between 2 otherwise identical pictures that flick back and forth. The…
Viewer-centered and body-centered frames of reference in direct visuomotor transformations.
Carrozzo, M; McIntyre, J; Zago, M; Lacquaniti, F
1999-11-01
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be disassociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.
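The two-stage transformation hypothesized above, from viewer-centered to body-centered (and then arm-centered) coordinates, amounts to composing rigid transforms. The sketch below converts a target expressed relative to the eyes into body-centered coordinates given an assumed head rotation and eye offset; all numbers are made up for illustration.

```python
# Sketch of a viewer-centered -> body-centered coordinate change of the kind
# discussed above: compose the head/eye pose (rotation plus offset relative to
# the trunk) with a target given in eye-centered coordinates. Numbers are made up.
import numpy as np

def rot_z(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

def viewer_to_body(target_eye, head_rotation, eye_offset_body):
    """target_eye: 3-vector in eye-centered coords; returns body-centered coords."""
    return head_rotation @ np.asarray(target_eye) + np.asarray(eye_offset_body)

# head turned 30 deg to the left, eyes 0.10 m forward / 0.55 m above the trunk origin
target_body = viewer_to_body([0.0, 0.40, -0.05],          # 40 cm straight ahead of the eyes
                             rot_z(30.0), [0.0, 0.10, 0.55])
```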
EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets.
Robbins, Kay A
2012-01-01
Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis.
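The summary-then-drill-down idea described above can be illustrated outside MATLAB: reduce a long multi-channel recording to per-block statistics and flag blocks worth inspecting in detail. The block length and z-score threshold below are arbitrary choices, and the code is a sketch rather than part of EEGVIS.

```python
# Sketch of the summary-then-drill-down idea described above (not EEGVIS code):
# reduce a long multi-channel array to per-block statistics and flag blocks whose
# amplitude spread is unusually large, as candidates for detailed viewing.
import numpy as np

def block_summary(data, fs, block_s=1.0, z_thresh=3.0):
    """data: (n_channels, n_samples); returns per-block std and flagged block indices."""
    n = int(fs * block_s)
    n_blocks = data.shape[1] // n
    blocks = data[:, :n_blocks * n].reshape(data.shape[0], n_blocks, n)
    spread = blocks.std(axis=(0, 2))                 # one number per block
    z = (spread - spread.mean()) / spread.std()
    return spread, np.flatnonzero(z > z_thresh)

rng = np.random.default_rng(2)
eeg = rng.normal(0, 10, (32, 60 * 256))                      # 32 channels, 60 s at 256 Hz
eeg[:, 30 * 256:31 * 256] += rng.normal(0, 80, (32, 256))    # injected artifact
spread, flagged = block_summary(eeg, fs=256)                 # expect block 30 to be flagged
```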
Shifting attention in viewer- and object-based reference frames after unilateral brain injury.
List, Alexandra; Landau, Ayelet N; Brooks, Joseph L; Flevaris, Anastasia V; Fortenbaugh, Francesca C; Esterman, Michael; Van Vleet, Thomas M; Albrecht, Alice R; Alvarez, Bryan D; Robertson, Lynn C; Schendel, Krista
2011-06-01
The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display comprised of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection. Published by Elsevier Ltd.
Visualizing global change: earth and biodiversity sciences for museum settings using HDTV
NASA Astrophysics Data System (ADS)
Duba, A.; Gardiner, N.; Kinzler, R.; Trakinski, V.
2006-12-01
Science Bulletins, a production group at the American Museum of Natural History (New York, USA), brings biological and Earth system science data and concepts to over 10 million visitors per year at 27 institutions around the U.S.A. Our target audience is diverse, from novice to expert. News stories and visualizations use the capabilities of satellite imagery to focus public attention on four general themes: human influences on species and ecosystems across all observable spatial extents; biotic feedbacks with the Earth's physical system; characterizing species and ecosystems; and recent events such as natural changes to ecosystems, major findings and publications, or recent syntheses. For Earth science, we use recent natural events to explain the broad scientific concepts of tectonic activity and the processes that underlie climate and weather events. Visualizations show the global, dynamic distribution of atmospheric constituents, ocean temperature and temperature anomaly, and sea ice. Long-term changes are set in contrast to seasonal and longer-term cycles so that viewers appreciate the variety of forces that affect Earth's physical system. We illustrate concepts at a level appropriate for a broad audience to learn more about the dynamic nature of Earth's biota and physical processes. Programming also includes feature stories that explain global change phenomena from the perspectives of eminent scientists and managers charged with implementing public policy based on the best available science. Over the past two and one-half years, biological science stories have highlighted applied research addressing lemur conservation in Madagascar, marine protected areas in the Bahamas, effects of urban sprawl on wood turtles in New England, and taxonomic surveys of marine jellies in Monterey Bay. Earth science stories have addressed the volcanic history of present-day Yellowstone National Park, tsunamis, the disappearance of tropical mountain glaciers, the North Atlantic Oscillation, and the oxygenation of the atmosphere. All of these visualizations and HD videos are accessible via the worldwide web with accompanying explanatory material. Periodic surveys of visitors indicate that these media are popular and are effective at communicating important biological and Earth system science concepts to the general public.
A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting
NASA Astrophysics Data System (ADS)
Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.
2006-12-01
Recently, many satellites for communication networks and scientific observation have been launched in the vicinity of the Earth (geo-space). The electromagnetic (EM) environment around these spacecraft is constantly influenced by the solar wind blowing from the Sun and by induced electromagnetic fields. These occasionally cause various problems, such as electrification and interference, for the spacecraft. It is therefore important to forecast the geo-space EM environment, just as it is to forecast weather on the ground. Owing to the recent remarkable progress of supercomputer technologies, numerical simulations have become powerful research methods in solar-terrestrial physics. For space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere coupling, which runs on an SX-6 supercomputer. Real-time solar wind parameters from the ACE spacecraft, updated every minute, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3-D visualization of the simulation results is indispensable for forecasting space weather more accurately. In the present study, we develop a real-time 3-D visualization website for the global MHD simulations. The 3-D visualizations of the simulation results are updated every 20 minutes in the following three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity and an orthogonal plane of potential in the ionosphere. A 3-D viewer application running in the Internet Explorer browser (ActiveX), developed with AVS/Express, was implemented for this study. Numerical data are saved in HDF5 format every minute. Users can easily search, retrieve and plot past simulation results (3-D visualization data and numerical data) using the STARS (Solar-terrestrial data Analysis and Reference System), a data analysis system for satellite and ground-based observation data in solar-terrestrial physics.
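A minimal version of the HDF5-to-plot step in a pipeline like the one described can be written with h5py and matplotlib. The file layout and dataset names below ("/magnetosphere/temperature", "x", "z") are assumptions for illustration, not the NICT file format.

```python
# Sketch of the HDF5 -> plot step of a pipeline like the one described above.
# Dataset names and layout are assumed for illustration only.
import h5py
import matplotlib.pyplot as plt

def plot_temperature_slice(h5_path, png_path):
    with h5py.File(h5_path, "r") as f:
        temp = f["/magnetosphere/temperature"][:]   # assumed 2-D slice, shape (len(z), len(x))
        x = f["/magnetosphere/x"][:]
        z = f["/magnetosphere/z"][:]
    plt.contourf(x, z, temp, levels=20)
    plt.xlabel("x [Re]"); plt.ylabel("z [Re]")
    plt.colorbar(label="temperature")
    plt.savefig(png_path, dpi=150)
    plt.close()

# plot_temperature_slice("mhd_snapshot.h5", "temperature_slice.png")
```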
What causes the facing-the-viewer bias in biological motion?
Weech, Séamas; McAdam, Matthew; Kenny, Sophie; Troje, Nikolaus F
2014-10-13
Orthographically projected biological motion point-light displays are generally ambiguous with respect to their orientation in depth, yet observers consistently prefer the facing-the-viewer interpretation. There has been discussion as to whether this bias can be attributed to the social relevance of biological motion stimuli or relates to local, low-level stimulus properties. In the present study we address this question. In Experiment 1, we compared the facing-the-viewer bias produced by a series of four stick figures and three human silhouettes that differed in posture, gender, and the presence versus absence of walking motion. Using a paradigm in which we asked observers to indicate the spinning direction of these figures, we found no bias when participants observed silhouettes, whereas a pronounced degree of bias was elicited by most stick figures. We hypothesized that the ambiguous surface normals on the lines and dots that comprise stick figures are prone to a visual bias that assumes surfaces to be convex. The local surface orientations of the occluding contours of silhouettes are unambiguous, and as such the convexity bias does not apply. In Experiment 2, we tested the role of local features in ambiguous surface perception by adding dots to the elbows and knees of silhouettes. We found biases consistent with the facing directions implied by a convex body surface. The results unify a number of findings regarding the facing-the-viewer bias. We conclude that the facing-the-viewer bias is established at the level of surface reconstruction from local image features rather than on a semantic level. © 2014 ARVO.
NASA Astrophysics Data System (ADS)
Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Kassin, A.; Villarreal, S.; Barba, M.; Dover, M.; Escarzaga, S. M.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.
2016-12-01
To better assess progress in Arctic Observing made by U.S. SEARCH, NSF AON, SAON, and related initiatives, an updated version of the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been released. This web mapping application and information system conveys the who, what, where, and when of "data collection sites" - the precise locations of monitoring assets, observing platforms, and wherever repeat marine or terrestrial measurements have been taken. Over 8000 sites across the circum-arctic are documented including a range of boreholes, ship tracks, buoys, towers, sampling stations, sensor networks, vegetation plots, stream gauges, ice cores, observatories, and more. Contributing partners are the U.S. NSF, ACADIS, ADIwg, AOOS, a2dc, AON, CAFF, GINA, IASOA, INTERACT, NASA ABoVE, and USGS, among others. Users can visualize, navigate, select, search, draw, print, view details, and follow links to obtain a comprehensive perspective of environmental monitoring efforts. We continue to develop, populate, and enhance AOV. Recent improvements include: a more intuitive and functional search tool, a modern cross-platform interface using javascript and HTML5, and hierarchical ISO metadata coupled with RESTful web services & metadata XLinks to span the data life cycle (from project planning to establishment of data collection sites to release of scientific datasets). Additionally, through collaborations with the Barrow Area Information Database (BAID, www.barrowmapped.org) we are exploring linkages with datacenters and have developed a prototype dashboard application that allows users to explore data services in the AOV application. AOV is founded on principles of interoperability, such that agencies and organizations can use the AOV Viewer and web services for their own purposes. In this way, AOV complements other distributed yet interoperable cyber resources and helps science planners, funding agencies, investigators, data specialists, and others to: assess status, identify overlap, fill gaps, optimize sampling design, refine network performance, clarify directions, access data, coordinate logistics, and collaborate to meet Arctic Observing goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division
2007-01-01
The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.
NASA Astrophysics Data System (ADS)
Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru
2017-11-01
The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote controlling of the tool via bi-directional communication over the Internet.
Using HIPPO Data for Formal and Informal Science Education
NASA Astrophysics Data System (ADS)
Rockwell, A.; Hatheway, B.; Zondlo, M. A.
2012-12-01
The HIAPER Pole-to-Pole Observations (HIPPO) field project recently concluded its mission to map greenhouse gases and black carbon from the Arctic to the Antarctic using the NSF/NCAR Gulfstream V. HIPPO resulted in visually-rich and easy-to-understand altitude/latitude curtain plots of several trace gases and black carbon, from five seasons during 2009-2011. The data and curtain plots are available for both formal and informal science education to support the instruction of atmospheric science and Earth systems. Middle and high school activities have been developed using these data and curtain plots, and an undergraduate course based on HIPPO data - Global Air Pollution - is offered at Princeton University. The visually stimulating curtain plots are unique in that a wide range of people can comprehend them because they provide an easy-to-understand picture of the global distribution of chemical species for non-scientists or beginning users, while also displaying valuable detailed information for the advanced viewer. The plots are a powerful graphical tool that can be used to communicate climate science because they illustrate the concepts of how trace gas distributions are linked to the large-scale dynamics of the Earth; show seasonal changes in distribution and concentrations; and use the same display format for each tracer. In order to connect people to the data, a multi-faceted and engaging public information program and supporting educational materials for HIPPO were developed. These provided a unique look into global field research and included social media platforms such as Facebook and Twitter; a range of videos from simple motion graphics to detailed narratives; both printed and online written materials; and mass-media publications.
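An altitude/latitude curtain plot of the kind described is straightforward to reproduce for classroom use with a pseudocolor plot. The field below is synthetic, chosen only to mimic a tracer that decreases with altitude and differs between hemispheres; it is not HIPPO data.

```python
# Sketch of an altitude/latitude "curtain plot" of the kind described above.
# The field plotted is synthetic, not HIPPO data.
import numpy as np
import matplotlib.pyplot as plt

lat = np.linspace(-85, 85, 120)          # degrees
alt = np.linspace(0, 14, 60)             # km
LAT, ALT = np.meshgrid(lat, alt)
# toy tracer: higher mixing ratios near the surface and in the northern hemisphere
tracer = 380 + 20 * np.exp(-ALT / 4.0) * (1 + 0.5 * np.tanh(LAT / 30.0))

plt.pcolormesh(LAT, ALT, tracer, shading="auto", cmap="viridis")
plt.xlabel("Latitude (deg)"); plt.ylabel("Altitude (km)")
plt.colorbar(label="mixing ratio (ppb)")
plt.title("Synthetic curtain plot")
plt.show()
```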
Ali, M A; Ahsan, Z; Amin, M; Latif, S; Ayyaz, A; Ayyaz, M N
2016-05-01
Globally, disease surveillance systems are playing a significant role in outbreak detection and response management of Infectious Diseases (IDs). However, in developing countries like Pakistan, epidemic outbreaks are difficult to detect due to scarcity of public health data and absence of automated surveillance systems. Our research is intended to formulate an integrated service-oriented visual analytics architecture for ID surveillance, identify key constituents and set up a baseline for easy reproducibility of such systems in the future. This research focuses on development of ID-Viewer, which is a visual analytics decision support system for ID surveillance. It is a blend of intelligent approaches to make use of real-time streaming data from Emergency Departments (EDs) for early outbreak detection, health care resource allocation and epidemic response management. We have developed a robust service-oriented visual analytics architecture for ID surveillance, which provides automated mechanisms for ID data acquisition, outbreak detection and epidemic response management. Classification of chief-complaints is accomplished using a dynamic classification module, which employs neural networks and fuzzy logic to categorize syndromes. Standard routines from the Centers for Disease Control and Prevention (CDC), i.e. c1-c3 (c1-mild, c2-medium and c3-ultra), and spatial scan statistics are employed for detection of temporal and spatio-temporal disease outbreaks, respectively. Prediction of imminent disease threats is accomplished using support vector regression for early warnings and response planning. Geographical visual analytics displays are developed that allow interactive visualization of syndromic clusters, monitoring of disease spread patterns, and identification of spatio-temporal risk zones. We analysed the performance of the surveillance framework using ID data for the years 2011-2015. The dynamic syndromic classifier is able to assign chief-complaints to appropriate syndromes with high classification accuracy. The outbreak detection methods are able to detect ID outbreaks at the start of epidemic time zones. The prediction model is able to forecast the dengue trend 20 weeks ahead with a nominal normalized root mean square error of 0.29. Interactive geo-spatiotemporal displays, i.e. heat maps and choropleth maps, are shown in the respective sections. The proposed framework will set a standard and provide the necessary details for future implementation of such a system in resource-constrained regions. It will improve early detection of outbreaks attributable to natural and man-made biological threats, monitor spatio-temporal epidemic trends and provide assurance that an outbreak has, or has not, occurred. Advanced analytics features will be beneficial in the timely formulation of health management policies, disease control activities and efficient health care resource allocation. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
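The support-vector-regression forecasting step can be illustrated with a short sketch using scikit-learn. This is not the ID-Viewer code; the synthetic weekly series, the eight-week lag structure, and the hyperparameters are assumptions made for the example.

    # Illustrative support-vector-regression forecast of a weekly case-count series.
    # Not the ID-Viewer implementation; lag features and hyperparameters are assumptions.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    weeks = np.arange(200)
    cases = 50 + 30 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, weeks.size)

    LAGS = 8  # use the previous 8 weekly counts as features
    X = np.array([cases[t - LAGS:t] for t in range(LAGS, cases.size)])
    y = cases[LAGS:]

    split = -20                      # hold out the last 20 weeks
    model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X[:split], y[:split])
    pred = model.predict(X[split:])

    nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / (y.max() - y.min())
    print(f"normalized RMSE on hold-out weeks: {nrmse:.2f}")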
MoleCoolQt – a molecule viewer for charge-density research
Hübschle, Christian B.; Dittrich, Birger
2011-01-01
MoleCoolQt is a molecule viewer for charge-density research. Features include the visualization of local atomic coordinate systems in multipole refinements based on the Hansen and Coppens formalism as implemented, for example, in the XD suite. Residual peaks and holes from XDfft are translated so that they appear close to the nearest atom of the asymmetric unit. Critical points from a topological analysis of the charge density can also be visualized. As in the program MolIso, color-mapped isosurfaces can be generated with a simple interface. Apart from its visualization features the program interactively helps in assigning local atomic coordinate systems and local symmetry, which can be automatically detected and altered. Dummy atoms – as sometimes required for local atomic coordinate systems – are calculated on demand; XD system files are updated after changes. When using the invariom database, potential scattering factor assignment problems can be resolved by the use of an interactive dialog. The following file formats are supported: XD, MoPro, SHELX, GAUSSIAN (com, FChk, cube), CIF and PDB. MoleCoolQt is written in C++ using the Qt4 library, has a user-friendly graphical user interface, and is available for several flavors of Linux, Windows and MacOS. PMID:22477783
minepath.org: a free interactive pathway analysis web server.
Koumakis, Lefteris; Roussos, Panos; Potamias, George
2017-07-03
MinePath (www.minepath.org) is a web-based platform that elaborates on, and radically extends, the identification of differentially expressed sub-paths in molecular pathways. Besides the network topology, the underlying MinePath algorithmic processes exploit exact gene-gene molecular relationships (e.g. activation, inhibition) and are able to identify differentially expressed pathway parts. Each pathway is decomposed into all its constituent sub-paths, which in turn are matched with corresponding gene expression profiles. The highly ranked, phenotype-inclined sub-paths are kept. Apart from the pathway analysis algorithm, the fundamental innovation of the MinePath web server concerns its advanced visualization and interactive capabilities. To our knowledge, this is the first pathway analysis server that introduces and offers visualization of the underlying and active pathway regulatory mechanisms instead of genes. Other features include live interaction, immediate visualization of functional sub-paths per phenotype and dynamic linked annotations for the engaged genes and molecular relations. The user can download not only the results but also the corresponding web viewer framework of the performed analysis. This feature provides the flexibility to immediately publish results without publishing source/expression data, and get all the functionality of a web-based pathway analysis viewer. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
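The sub-path decomposition idea can be sketched with networkx: enumerate the simple paths of a small signed pathway graph and check each against a sample's expression calls. The toy pathway, the expression calls, and the matching rule below are hypothetical simplifications, not the MinePath algorithm itself.

    # Toy sketch: enumerate sub-paths of a signed pathway graph and check each
    # against a sample's expression calls. Not the MinePath algorithm itself.
    import itertools
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("EGFR", "RAS", relation="activation")
    G.add_edge("RAS",  "ERK", relation="activation")
    G.add_edge("PTEN", "AKT", relation="inhibition")

    expressed = {"EGFR": True, "RAS": True, "ERK": True, "PTEN": True, "AKT": True}

    def consistent(path):
        """Toy rule: an activation edge needs both genes expressed; an
        inhibition edge is inconsistent if both genes are expressed."""
        for u, v in zip(path, path[1:]):
            rel = G[u][v]["relation"]
            if rel == "activation" and not (expressed[u] and expressed[v]):
                return False
            if rel == "inhibition" and (expressed[u] and expressed[v]):
                return False
        return True

    for src, dst in itertools.permutations(G.nodes, 2):
        for path in nx.all_simple_paths(G, src, dst):
            print(path, "matches" if consistent(path) else "does not match")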
Habitual attention in older and young adults.
Jiang, Yuhong V; Koutstaal, Wilma; Twedell, Emily L
2016-12-01
Age-related decline is pervasive in tasks that require explicit learning and memory, but such reduced function is not universally observed in tasks involving incidental learning. It is unknown if habitual attention, involving incidental probabilistic learning, is preserved in older adults. Previous research on habitual attention investigated contextual cuing in young and older adults, yet contextual cuing relies not only on spatial attention but also on context processing. Here we isolated habitual attention from context processing in young and older adults. Using a challenging visual search task in which the probability of finding targets was greater in 1 of 4 visual quadrants in all contexts, we examined the acquisition, persistence, and spatial-reference frame of habitual attention. Although older adults showed slower visual search times and steeper search slopes (more time per additional item in the search display), like young adults they rapidly acquired a strong, persistent search habit toward the high-probability quadrant. In addition, habitual attention was strongly viewer-centered in both young and older adults. The demonstration of preserved viewer-centered habitual attention in older adults suggests that it may be used to counter declines in controlled attention. This, in turn, suggests the importance, for older adults, of maintaining habit-related spatial arrangements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Visualization of GPM Standard Products at the Precipitation Processing System (PPS)
NASA Astrophysics Data System (ADS)
Kelley, O.
2010-12-01
Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide realtime globally-gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).
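Swath products of this kind are typically distributed in HDF formats; the sketch below shows how a swath-style HDF5 granule might be read in Python with h5py. The file name and the group/dataset paths are hypothetical placeholders, not the actual TRMM/GPM product layout.

    # Sketch of reading a swath-style HDF5 precipitation product with h5py.
    # File name and group/dataset paths are hypothetical placeholders.
    import h5py

    with h5py.File("swath_granule.h5", "r") as f:
        lat  = f["Swath/Latitude"][:]          # (scan, pixel)
        lon  = f["Swath/Longitude"][:]
        rain = f["Swath/precipRate"][:]        # same shape as lat/lon
        print("granule shape:", rain.shape, "max rate:", rain.max())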
Del Zotto, Marzia; Pegna, Alan J
2017-06-01
The dynamics of brain activation reflecting attractiveness in humans are unclear. Among the different features affecting attractiveness of the female body, the waist-to-hip ratio (WHR) is considered to be crucial. To date, however, no event-related potential (ERP) study has addressed the question of its associated pattern of brain activation. We carried out two different experiments: (a) a behavioural study, to judge the level of attractiveness of female realistic models depicting 4 different WHRs (0.6, 0.7, 0.8, 0.9) with and without clothes; (b) an EEG paradigm, to record brain activity while participants (heterosexual men and women) viewed these same models. Behavioural results showed that WHRs of 0.7 were considered more attractive than the others. ERP analyses revealed a different pattern of activation for male and female viewers. The 0.7 ratio elicited greater positivity at the P1 level in male viewers but not females. Naked bodies increased the N190 in both groups and peaked earlier for the 0.7 ratio in the male viewers. Finally, the late positive component (LPC) was found to be greater in male than in female viewers and was globally more marked for naked bodies as well as WHRs of 0.7 in both groups of viewers. These results provide the first electrophysiological evidence of specific time periods linked to the processing of a body feature denoting attractiveness and therefore playing a role in mate choice.
The rate of change of vergence-accommodation conflict affects visual discomfort.
Kim, Joohwan; Kane, David; Banks, Martin S
2014-12-01
Stereoscopic (S3D) displays create conflicts between the distance to which the eyes must converge and the distance to which the eyes must accommodate. Such conflicts require the viewer to overcome the normal coupling between vergence and accommodation, and this effort appears to cause viewer discomfort. Vergence-accommodation coupling is driven by the phasic components of the underlying control systems, and those components respond to relatively fast changes in vergence and accommodative stimuli. Given the relationship between phasic changes and vergence-accommodation coupling, we examined how the rate of change in the vergence-accommodation conflict affects viewer discomfort. We used a stereoscopic display that allows independent manipulation of the stimuli to vergence and accommodation. We presented stimuli that simulate natural viewing (i.e., vergence and accommodative stimuli changed together) and stimuli that simulate S3D viewing (i.e., vergence stimulus changes but accommodative stimulus remains fixed). The changes occurred at 0.01, 0.05, or 0.25 Hz. The lowest rate is too slow to stimulate the phasic components while the highest rate is well within the phasic range. The results were consistent with our expectation: somewhat greater discomfort was experienced when stimulus distance changed rapidly, particularly in S3D viewing when the vergence stimulus changed but the accommodative stimulus did not. These results may help in the generation of guidelines for the creation and viewing of stereo content with acceptable viewer comfort.
The Visual Arts and Qualitative Research: Diverse and Emerging Voices.
ERIC Educational Resources Information Center
Stephen, Veronica P.
The arts are basic educational processes that involve students with different abilities and from differing age groups in sensory perception. This perception, augmented by the use of art compositions, establishes a critical dialogue between the medium and the viewer. What one views, sees, and observes in an art piece serves to create a…
The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data re...
ERIC Educational Resources Information Center
Dianis, Gina
2008-01-01
In the project described in this article, sixth-grade students use printmaking processes to design an art image of themselves accompanied by a reflective poem. The lesson begins with a discussion of self-portraits by famous artists, inviting questions such as "What visual clues does the artist present to the viewer?" and "How has the artist placed…
Filling in the Gaps: Memory Implications for Inferring Missing Content in Graphic Narratives
ERIC Educational Resources Information Center
Magliano, Joseph P.; Kopp, Kristopher; Higgs, Karyn; Rapp, David N.
2017-01-01
Visual narratives, including graphic novels, illustrated instructions, and picture books, convey event sequences constituting a plot but cannot depict all events that make up the plot. Viewers must generate inferences that fill the gaps between explicitly shown images. This study explored the inferential products and memory implications of…
Figure 4 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
Gene-list view of genomic data. The gene-list view allows users to compare data across a set of loci. The data in this figure includes copy number, mutation, and clinical data from 202 glioblastoma samples from TCGA. Adapted from Figure 7; Thorvaldsdottir H et al. 2012
Figure 1 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
A screenshot of the IGV user interface at the chromosome view. IGV user interface showing five data types (copy number, methylation, gene expression, and loss of heterozygosity; mutations are overlaid with black boxes) from approximately 80 glioblastoma multiforme samples. Adapted from Figure S1; Robinson et al. 2011
The Rhetoric of the Frame Revisioning Archival Photographs in "The Civil War."
ERIC Educational Resources Information Center
Lancioni, Judith
1996-01-01
Illustrates the ways in which mobile framing and reframing (techniques used on the archival photographs used in the documentary film "The Civil War") constitute a visual argument. Suggests that these techniques lead viewers to analyze the photographs from the vantage point of both current and past ideologies, and proves especially…
Interactive Web-Based Pointillist Visualization of Hydrogenic Orbitals Using Jmol
ERIC Educational Resources Information Center
Tully, Shane P.; Stitt, Thomas M.; Caldwell, Robert D.; Hardock, Brian J.; Hanson, Robert M.; Maslak, Przemyslaw
2013-01-01
A Monte Carlo method is used to generate interactive pointillist displays of electron density in hydrogenic orbitals. The Web applet incorporating the Jmol viewer allows for clear and accurate presentation of three-dimensional shapes and sizes of orbitals up to "n" = 5, where "n" is the principal quantum number. The obtained radial…
The Impact of Information Channel on Verbal Recall Among Preschool Aged Television Viewers.
ERIC Educational Resources Information Center
Welch, Alicia J.
A study investigated the learning impact of audio, visual, and audiovisual information channels in televised messages among preschool children. The messages consisted of a half-hour videotape of "Sesame Street" episodes (presented to 48 subjects), and a videotape of an intact "Mister Roger's Neighborhood" program (presented to…
ERIC Educational Resources Information Center
Mackert, Michael; Lazard, Allison; Guadagno, Marie; Hughes Wagner, Jessica
2014-01-01
Objective: Lack of sleep among college students negatively impacts health and academic outcomes. Building on research that implied motion imagery increases brain activity, this project tested visual design strategies to increase viewers' engagement with a health communication campaign promoting napping to improve sleep habits. Participants:…
USEPA’s ToxCast program has generated high-throughput bioactivity screening (HTS) data on thousands of chemicals. The ToxCast program has described and annotated the HTS assay battery with respect to assay design and target information (e.g., gene target). Recent stakeholder and ...
Fast neutron mutants database and web displays at SoyBase
USDA-ARS?s Scientific Manuscript database
SoyBase, the USDA-ARS soybean genetics and genomics database, has been expanded to include data for the fast neutron mutants produced by Bolon, Vance, et al. In addition to the expected text and sequence homology searches and visualization of the indels in the context of the genome sequence viewer, ...
Ego depletion in visual perception: Ego-depleted viewers experience less ambiguous figure reversal.
Wimmer, Marina C; Stirk, Steven; Hancock, Peter J B
2017-10-01
This study examined the effects of ego depletion on ambiguous figure perception. Adults (N = 315) received an ego depletion task and were subsequently tested on their inhibitory control abilities that were indexed by the Stroop task (Experiment 1) and their ability to perceive both interpretations of ambiguous figures that was indexed by reversal (Experiment 2). Ego depletion had a very small effect on reducing inhibitory control (Cohen's d = .15) (Experiment 1). Ego-depleted participants had a tendency to take longer to respond in Stroop trials. In Experiment 2, ego depletion had small to medium effects on the experience of reversal. Ego-depleted viewers tended to take longer to reverse ambiguous figures (duration to first reversal) when naïve of the ambiguity and experienced less reversal both when naïve and informed of the ambiguity. Together, findings suggest that ego depletion has small effects on inhibitory control and small to medium effects on bottom-up and top-down perceptual processes. The depletion of cognitive resources can reduce our visual perceptual experience.
Web based visualization of large climate data sets
Alder, Jay R.; Hostetler, Steven W.
2015-01-01
We have implemented the USGS National Climate Change Viewer (NCCV), which is an easy-to-use web application that displays future projections from global climate models over the United States at the state, county and watershed scales. We incorporate the NASA NEX-DCP30 statistically downscaled temperature and precipitation for 30 global climate models being used in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and hydrologic variables we simulated using a simple water-balance model. Our application summarizes very large, complex data sets at scales relevant to resource managers and citizens and makes climate-change projection information accessible to users of varying skill levels. Tens of terabytes of high-resolution climate and water-balance data are distilled to compact binary format summary files that are used in the application. To alleviate slow response times under high loads, we developed a map caching technique that reduces the time it takes to generate maps by several orders of magnitude. The reduced access time scales to >500 concurrent users. We provide code examples that demonstrate key aspects of data processing, data exporting/importing and the caching technique used in the NCCV.
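The paper provides its own code examples; as a rough illustration of the general idea behind the caching technique, the sketch below keys rendered map images by their request parameters so repeated requests are served from disk instead of being re-rendered. It is a generic sketch, not the NCCV code, and the parameter names are hypothetical.

    # Rough sketch of parameter-keyed map caching (not the NCCV implementation).
    import hashlib
    import os

    CACHE_DIR = "map_cache"
    os.makedirs(CACHE_DIR, exist_ok=True)

    def render_map(model, variable, season, region):
        """Stand-in for the expensive map-rendering step; returns PNG bytes."""
        return f"{model}-{variable}-{season}-{region}".encode()

    def get_map(model, variable, season, region):
        key = hashlib.sha1(f"{model}|{variable}|{season}|{region}".encode()).hexdigest()
        path = os.path.join(CACHE_DIR, key + ".png")
        if os.path.exists(path):                 # cache hit: skip rendering entirely
            with open(path, "rb") as f:
                return f.read()
        png = render_map(model, variable, season, region)
        with open(path, "wb") as f:              # cache miss: render once, store
            f.write(png)
        return png

    get_map("CCSM4", "tasmax", "JJA", "huc8-01010002")  # a repeat call is served from cache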
The Unidata Integrated Data Viewer
NASA Astrophysics Data System (ADS)
Weber, W. J.; Ho, Y.
2016-12-01
The Unidata Integrated Data Viewer (IDV) is a free and open source, virtual globe, software application that enables three dimensional viewing of earth science data. The Unidata IDV is data agnostic and can display and analyze disparate data in a single view. This capability facilitates cross discipline research and allows for multiple observation platforms to be displayed simultaneously for any given event. The Unidata IDV is a mature application, written in JAVA, and has been serving the earth science community for over 15 years. This demonstration will focus on near real time global satelliteobservations, the integration of the COSMIC radio occultation data set that profiles the atmosphere, and high resolution numerical weather prediction.
Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.
Bae, Juhee; Watson, Benjamin
2014-12-01
In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator.
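As a concrete illustration of the distance measure mentioned above, the sketch below compares a communicated structure with a perceived (card-sorted) structure using the third-party zss package, an implementation of the Zhang-Shasha tree edit distance. The example trees are hypothetical, not the study's stimuli.

    # Illustrative comparison of a communicated vs. a perceived grouping structure
    # using tree edit distance (zss package, Zhang-Shasha algorithm).
    from zss import Node, simple_distance

    communicated = (Node("page")
                    .addkid(Node("header").addkid(Node("title")).addkid(Node("byline")))
                    .addkid(Node("body").addkid(Node("figure")).addkid(Node("caption"))))

    perceived = (Node("page")
                 .addkid(Node("header").addkid(Node("title")))
                 .addkid(Node("body").addkid(Node("byline"))
                                     .addkid(Node("figure"))
                                     .addkid(Node("caption"))))

    # 0 would mean the viewer recovered exactly the intended structure;
    # larger values mean more insertions/deletions/relabels are needed.
    print(simple_distance(communicated, perceived))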
The CAVE (TM) automatic virtual environment: Characteristics and applications
NASA Technical Reports Server (NTRS)
Kenyon, Robert V.
1995-01-01
Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected on to three walls and the floor. The CAVE is a multi-person, room sized, high resolution, 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.
CAS-viewer: web-based tool for splicing-guided integrative analysis of multi-omics cancer data.
Han, Seonggyun; Kim, Dongwook; Kim, Youngjun; Choi, Kanghoon; Miller, Jason E; Kim, Dokyoon; Lee, Younghee
2018-04-20
The Cancer Genome Atlas (TCGA) project is a public resource that provides transcriptomic, DNA sequence, methylation, and clinical data for 33 cancer types. Transforming the large size and high complexity of TCGA cancer genome data into integrated knowledge can be useful to promote cancer research. Alternative splicing (AS) is a key regulatory mechanism of genes in human cancer development and in the interaction with epigenetic factors. Therefore, AS-guided integration of existing TCGA data sets will make it easier to gain insight into the genetic architecture of cancer risk and related outcomes. There are already existing tools analyzing and visualizing alternative mRNA splicing patterns for large-scale RNA-seq experiments. However, these existing web-based tools are limited to the analysis of individual TCGA data sets at a time, such as only transcriptomic information. We implemented CAS-viewer (integrative analysis of Cancer genome data based on Alternative Splicing), a web-based tool leveraging multi-cancer omics data from TCGA. It illustrates alternative mRNA splicing patterns along with methylation, miRNAs, and SNPs, and then provides an analysis tool to link differential transcript expression ratio to methylation, miRNA, and splicing regulatory elements for 33 cancer types. Moreover, one can analyze AS patterns with clinical data to identify potential transcripts associated with different survival outcome for each cancer. CAS-viewer is a web-based application for transcript isoform-driven integration of multi-omics data in multiple cancer types and will aid in the visualization and possible discovery of biomarkers for cancer by integrating multi-omics data from TCGA.
Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio
2013-05-01
With the advent of autostereoscopic display techniques and increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economical, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology has driven the current market growth, there is an important issue to consider for consistent development and growth in the display market. To put it briefly, human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. Results obtained in this study are expected to provide viewing guidelines, help protect viewers against undesirable 3D effects, and contribute to gradual progress towards human-friendly mobile 3D viewing.
Tactical Mission Command (TMC)
2016-03-01
capabilities to Army commanders and their staffs, consisting primarily of a user-customizable Common Operating Picture (COP) enabled with real-time... COP viewer and data management capability. It is a collaborative visualization and planning application that also provides a common map display... COP): Display the COP consisting of the following: 1. Friendly forces determined by the commander, including subordinate and supporting units at
What Pictures Can and Can't Do for Children's Story Understanding.
ERIC Educational Resources Information Center
Meringoff, Laurene K.
Contrasts between children's visualization and understanding of a filmed story and of a story in print are drawn in the introduction of this symposium paper. Discussion then briefly focuses on variables related to studying effects of story pictures on viewers, such as the story-line, audience characteristics, and the coordination of story modality…
Figure 2 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
Grouping and sorting genomic data in IGV. The IGV user interface displaying 202 glioblastoma samples from TCGA. Samples are grouped by tumor subtype (second annotation column) and data type (first annotation column) and sorted by copy number of the EGFR locus (middle column). Adapted from Figure 1; Robinson et al. 2011
Figure 5 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
Split-Screen View. The split-screen view is useful for exploring relationships of genomic features that are independent of chromosomal location. Color is used here to indicate mate pairs that map to different chromosomes, chromosomes 1 and 6, suggesting a translocation event. Adapted from Figure 8; Thorvaldsdottir H et al. 2012
Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool
ERIC Educational Resources Information Center
Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román
2014-01-01
This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…
Image Maps in the World-Wide Web: The Uses and Limitations.
ERIC Educational Resources Information Center
Cochenour, John J.; And Others
A study of nine different image maps from World Wide Web home pages was conducted to evaluate their effectiveness in information display and access, relative to visual, navigational, and practical characteristics. Nine independent viewers completed 20-question surveys on the image maps, in which they evaluated the characteristics of the maps on a…
Visual Rhetoric and Viewer Empathy in News Photographs
ERIC Educational Resources Information Center
Rist, Mary F.
2007-01-01
Over the last two or three decades a revolution has taken place in the area of communication which forces people to rethink the social and the semiotic landscape of Western developed societies. The effect of this revolution has been to dislodge written language from the centrality which it has held, or which has been ascribed to it, in public…
LookSeq: a browser-based viewer for deep sequencing data.
Manske, Heinrich Magnus; Kwiatkowski, Dominic P
2009-11-01
Sequencing a genome to great depth can be highly informative about heterogeneity within an individual or a population. Here we address the problem of how to visualize the multiple layers of information contained in deep sequencing data. We propose an interactive AJAX-based web viewer for browsing large data sets of aligned sequence reads. By enabling seamless browsing and fast zooming, the LookSeq program assists the user to assimilate information at different levels of resolution, from an overview of a genomic region to fine details such as heterogeneity within the sample. A specific problem, particularly if the sample is heterogeneous, is how to depict information about structural variation. LookSeq provides a simple graphical representation of paired sequence reads that is more revealing about potential insertions and deletions than are conventional methods.
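The paired-read logic can be illustrated outside the browser: the sketch below uses pysam to flag read pairs whose insert size deviates strongly from the expected value, which is the same signal LookSeq depicts graphically for potential insertions and deletions. The BAM file name, region, and thresholds are hypothetical, and an index file is assumed to exist.

    # Sketch: flag read pairs with unusual insert sizes, a simple proxy for the
    # potential insertions/deletions LookSeq depicts. File and region are placeholders.
    import pysam

    EXPECTED, TOLERANCE = 500, 200        # assumed library insert size and spread

    with pysam.AlignmentFile("sample.bam", "rb") as bam:
        for read in bam.fetch("chr1", 100_000, 200_000):
            if not read.is_paired or read.mate_is_unmapped or read.template_length == 0:
                continue
            tlen = abs(read.template_length)
            if abs(tlen - EXPECTED) > TOLERANCE:
                print(read.query_name, tlen, "possible insertion/deletion signal")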
Kimle, P A; Fiore, A M
1992-12-01
The perceptual and affective responses of 44 women to actual illustrated and photographed fashion advertisements during focused interviews were explored. Content analysis methods identified categories of response; frequency of response categories for the two media were compared using Fisher's z tests. Significant differences in perceptual responses included greater visual interest created by the use of color in photographs, greater interest in layout and design features of the illustrations, and interest in characteristics of the models in the photographs. Affective response differences included greater preference for photographic advertisements and the garments in them. Contrary to suggestions from professionals in fashion advertising, no significant differences were found in viewers' perceptions of information about the products in the advertisements or perceptions of meaning and aesthetic response.
Visual Communication and Cognition in Everyday Decision-Making.
Jaenichen, Claudine
2017-01-01
Understanding cognition and the context of decision-making should be prioritized in the design process in order to accurately anticipate the outcome for intended audiences. A thorough understanding of cognition has been excluded from foundational design principles in visual communication. Defining leisure, direct, urgent, and emergency scenarios, and providing examples of work that deeply considers the viewer's relationship to the design solution in the context of these scenarios, allows us to affirm the relevancy of cognition as a design variable and the importance of projects that advocate public utility.
Depicting surgical anatomy of the porta hepatis in living donor liver transplantation.
Kelly, Paul; Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne
2017-01-01
Visualizing the complex anatomy of vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method to visualize anatomy with the purpose of refining preoperative planning and teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages including the integration of pre-surgical planning; case-specific footage and 3D models of the liver and associated vasculature, reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Use of visualization techniques such as transparency and overlays allows viewers not only to see the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables integrating case-based operative footage with advanced editing techniques for visualizing not only the surgical procedure, but also complex anatomy such as vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The novel educational system described in this paper enables integrating operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome.
Global Visualization (GloVis) Viewer
2005-01-01
GloVis (http://glovis.usgs.gov) is a browse image-based search and order tool that can be used to quickly review the land remote sensing data inventories held at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS). GloVis was funded by the AmericaView project to reduce the difficulty of identifying and acquiring data for user-defined study areas. Updated daily with the most recent satellite acquisitions, GloVis displays data in a mosaic, allowing users to select any area of interest worldwide and immediately view all available browse images for the following Landsat data sets: Multispectral Scanner (MSS), Multi-Resolution Land Characteristics (MRLC), Orthorectified, Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and ETM+ Scan Line Corrector-off (SLC-off). Other data sets include Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS), Aqua MODIS, and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion data.
The RCSB protein data bank: integrative view of protein, gene and 3D structural information
Rose, Peter W.; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R.; Christie, Cole H.; Costanzo, Luigi Di; Duarte, Jose M.; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S.; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S.; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D.; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y.; Zardecki, Christine; Berman, Helen M.; Burley, Stephen K.
2017-01-01
The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a ‘Structural View of Biology.’ Recent developments have improved the User experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive. PMID:27794042
DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool
Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary
2008-01-01
Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
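DV3D builds on VTK's Python bindings; the minimal pipeline below (source, mapper, actor, renderer) is the generic pattern such a viewer sets up for each data overlay. It is a standard VTK example, not DV3D code, and the sphere source stands in for a loaded data surface.

    # Minimal VTK rendering pipeline of the kind DV3D layers for each data overlay.
    import vtk

    source = vtk.vtkSphereSource()              # stand-in for a loaded data surface
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(source.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()                          # interactive window; close it to exit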
Neurons in the monkey amygdala detect eye contact during naturalistic social interactions.
Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M
2014-10-20
Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea, while fixations stabilize the image. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations on the eyes of others and to eye contact. These "eye cells" share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Critical Review on the Use of Support Values in Tree Viewers and Bioinformatics Toolkits.
Czech, Lucas; Huerta-Cepas, Jaime; Stamatakis, Alexandros
2017-06-01
Phylogenetic trees are routinely visualized to present and interpret the evolutionary relationships of species. Most empirical evolutionary data studies contain a visualization of the inferred tree with branch support values. Ambiguous semantics in tree file formats can lead to erroneous tree visualizations and therefore to incorrect interpretations of phylogenetic analyses. Here, we discuss problems that arise when displaying branch values on trees after rerooting. Branch values are typically stored as node labels in the widely-used Newick tree format. However, such values are attributes of branches. Storing them as node labels can therefore yield errors when rerooting trees. This depends on the mostly implicit semantics that tools deploy to interpret node labels. We reviewed ten tree viewers and ten bioinformatics toolkits that can display and reroot trees. We found that 14 out of 20 of these tools do not permit users to select the semantics of node labels. Thus, unaware users might obtain incorrect results when rooting trees. We illustrate such incorrect mappings for several test cases and real examples taken from the literature. This review has already led to improvements in eight tools. We suggest tools should provide options that explicitly force users to define the semantics of node labels. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
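The semantic ambiguity is easy to reproduce in a script. The sketch below uses the ete3 toolkit (one possible choice) to read internal node labels of a Newick string as support values and to inspect them before and after rerooting; whether each value still annotates the branch it was computed for depends entirely on the tool's interpretation of node labels.

    # Reproduce the node-label ambiguity: support values stored as internal node
    # labels, inspected before and after rerooting (ete3 is one possible toolkit).
    from ete3 import Tree

    newick = "((A:1,B:1)95:1,(C:1,D:1)80:1);"   # 95 and 80 are branch support values
    t = Tree(newick)                             # default parser reads them as node support

    def show(tree, title):
        print(title)
        for node in tree.traverse():
            if not node.is_leaf():
                print("  support", node.support, "above", sorted(node.get_leaf_names()))

    show(t, "before rerooting")
    t.set_outgroup(t.search_nodes(name="A")[0])  # reroot on leaf A
    show(t, "after rerooting")                   # check which branches now carry 95 / 80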
Scaling Quelccaya: Using 3-D Animation and Satellite Data To Visualize Climate Change
NASA Astrophysics Data System (ADS)
Malone, A.; Leich, M.
2017-12-01
The near-global glacier retreat of recent decades is among the most convincing evidence for contemporary climate change. The epicenter of this action, however, is often far from population-dense centers. How can a glacier's scale, both physical and temporal, be communicated to those far away? This project, an artist-scientist collaboration, proposes an alternate system for presenting climate change data, designed to evoke a more visceral response through a visual, geospatial, poetic approach. Focusing on the Quelccaya Ice Cap, the world's largest tropical glaciated area, located in the Peruvian Andes, we integrate 30 years of satellite imagery and elevation models with 3D animation and gaming software to bring it into a virtual juxtaposition with a model of the city of Chicago. Using Chicago as a cosmopolitan North American "measuring stick," we apply glaciological models to determine, for instance, the amount of ice that has melted on Quelccaya over the last 30 years and what height an equivalent amount of snow would reach if it fell on the city of Chicago (circa 600 feet, higher than the Willis Tower). Placing the two sites in a framework of intimate scale, we present a more imaginative and psychologically astute manner of portraying the sober facts of climate change, by inviting viewers to learn and consider without inducing fear.
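The back-of-the-envelope conversion behind that comparison can be sketched as follows. Every input number below is an illustrative placeholder rather than a measured value from the project; the project's own measured inputs are what yield the roughly 600-foot figure quoted above.

    # Illustrative scaling of melted ice onto an urban footprint.
    # All input numbers are placeholders, not measurements from the project.
    ICE_AREA_KM2 = 50.0     # assumed glaciated area that thinned (km^2)
    THINNING_M   = 30.0     # assumed average ice thickness lost over 30 years (m)
    RHO_ICE      = 900.0    # kg/m^3
    RHO_SNOW     = 300.0    # kg/m^3, fresh-ish snow
    CITY_KM2     = 600.0    # approximate city land area (km^2)

    ice_volume_m3  = ICE_AREA_KM2 * 1e6 * THINNING_M
    snow_volume_m3 = ice_volume_m3 * RHO_ICE / RHO_SNOW     # same mass, lower density
    snow_depth_m   = snow_volume_m3 / (CITY_KM2 * 1e6)
    print(f"equivalent snow depth over the city: {snow_depth_m:.0f} m "
          f"({snow_depth_m * 3.28:.0f} ft)")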
[The use of an opect optic system in neurosurgical practice].
Kalinovskiy, A V; Rzaev, D A; Yoshimitsu, K
2018-01-01
Modern neurosurgical practice is impossible without access to various information sources. The use of MRI and MSCT data during surgery is an integral part of the neurosurgeon's daily practice. Devices capable of managing an image viewer system without direct contact with equipment simplify working in the operating room. The aim was to test operation of a non-contact MRI and MSCT image viewer system in the operating room and to evaluate its effectiveness. An Opect non-contact image management system, developed at the Tokyo Women's Medical University, was installed in one of the operating rooms of the Novosibirsk Federal Center of Neurosurgery in 2014. In 2015, the Opect system was used by operating surgeons in 73 surgeries performed in the same operating room. The system effectiveness was analyzed based on a survey of surgeons. The non-contact image viewer system proved easy for the personnel to learn and operate, easy to manage, and convenient for presenting visual information during surgery. Application of the Opect system simplifies work with neuroimaging data during surgery. The surgeon can independently view series of relevant MRI and MSCT scans without any assistance.
Now you see me, now you don't: iridescence increases the efficacy of lizard chromatic signals
NASA Astrophysics Data System (ADS)
Pérez i de Lanuza, Guillem; Font, Enrique
2014-10-01
The selective forces imposed by primary receivers and unintended eavesdroppers of animal signals often act in opposite directions, constraining the development of conspicuous coloration. Because iridescent colours change their chromatic properties with viewer angle, iridescence offers a potential mechanism to relax this trade-off when the relevant observers involved in the evolution of signal design adopt different viewer geometries. We used reflectance spectrophotometry and visual modelling to test if the striking blue head coloration of males of the lizard Lacerta schreiberi (1) is iridescent and (2) is more conspicuous when viewed from the perspective of conspecifics than from that of the main predators of adult L. schreiberi (raptors). We demonstrate that the blue heads of L. schreiberi show angle-dependent changes in their chromatic properties. This variation allows the blue heads to be relatively conspicuous to conspecific viewers located in the same horizontal plane as the sender, while simultaneously being relatively cryptic to birds that see it from above. This study is the first to suggest the use of angle-dependent chromatic signals in lizards, and provides the first evidence of the adaptive function of iridescent coloration based on its detectability to different observers.
Peterka, Tom; Kooima, Robert L; Sandin, Daniel J; Johnson, Andrew; Leigh, Jason; DeFanti, Thomas A
2008-01-01
A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems, such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
NASA Astrophysics Data System (ADS)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
2017-10-01
ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open-source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.
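For readers unfamiliar with ParaView's scripted interface, a minimal batch example run with ParaView's pvpython looks like the sketch below. It shows only the generic paraview.simple API, not the art/Catalyst connection described here; the output file name is arbitrary.

    # Minimal pvpython sketch using paraview.simple (run with ParaView's pvpython).
    # This shows only the generic scripted interface, not the art/Catalyst bridge.
    from paraview.simple import Sphere, Show, Render, SaveScreenshot

    sphere = Sphere(ThetaResolution=32, PhiResolution=32)  # stand-in for event geometry
    Show(sphere)          # add the source to the active view
    Render()              # draw it
    SaveScreenshot("event_display.png")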
Kozhevnikov, Maria; Dhond, Rupali P.
2012-01-01
Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI – anaglyphic glasses), and 3DI (head mounted display with position and head orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two other non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003
Visualizing speciation in artificial cichlid fish.
Clement, Ross
2006-01-01
The Cichlid Speciation Project (CSP) is an ALife simulation system for investigating open problems in the speciation of African cichlid fish. The CSP can be used to perform a wide range of experiments that show that speciation is a natural consequence of certain biological systems. A visualization system capable of extracting the history of speciation from low-level trace data and creating a phylogenetic tree has been implemented. Unlike previous approaches, this visualization system presents a concrete trace of speciation, rather than a summary of low-level information from which the viewer can make subjective decisions on how speciation progressed. The phylogenetic trees are a more objective visualization of speciation, and enable automated collection and summarization of the results of experiments. The visualization system is used to create a phylogenetic tree from an experiment that models sympatric speciation.
For Whom the Bell Tolls and the Birth of the New Auteur Movement.
ERIC Educational Resources Information Center
Kerns, H. Dan
The state of the motion picture industry is reviewed, focusing on needed change in the practice of product placement. The study of the placements of advertising in films should be of interest to the student of visual literacy. Product placers are using films to advertise their products to entertainment seekers. The viewer, often a child, may not…
Xie, Yang; Ying, Jinyong; Xie, Dexuan
2017-03-30
SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.
Design features of graphs in health risk communication: a systematic review.
Ancker, Jessica S; Senathirajah, Yalini; Kukafka, Rita; Starren, Justin B
2006-01-01
This review describes recent experimental and focus group research on graphics as a method of communication about quantitative health risks. Some of the studies discussed in this review assessed the effect of graphs on quantitative reasoning, others assessed effects on behavior or behavioral intentions, and still others assessed viewers' likes and dislikes. Graphical features that improve the accuracy of quantitative reasoning appear to differ from the features most likely to alter behavior or intentions. For example, graphs that make part-to-whole relationships available visually may help people attend to the relationship between the numerator (the number of people affected by a hazard) and the denominator (the entire population at risk), whereas graphs that show only the numerator appear to inflate the perceived risk and may induce risk-averse behavior. Viewers often preferred design features such as visual simplicity and familiarity that were not associated with accurate quantitative judgments. Communicators should not assume that all graphics are more intuitive than text; many of the studies found that patients' interpretations of the graphics were dependent upon expertise or instruction. Potentially useful directions for continuing research include interactions with educational level and numeracy, as well as successful ways to communicate uncertainty about risk.
Johnson, Elizabeth K; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F
2017-02-01
Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine the level of viewers' visual attention at treatment need levels 3 to 7 of this scale for persons considered "attractive," "average," or "unattractive." Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Viewers demonstrated good to excellent reliability among the 6 interest areas (intraviewer reliability, 0.70-0.96; interviewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. However, only with the attractive models were significant differences in fixation density and duration found between borderline levels among female viewers. Female viewers paid attention to different areas of the face than did male viewers. The importance of dental attractiveness is amplified in facially attractive female models compared with average and unattractive female models between near-ideal and borderline-severe dentally unattractive levels. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Global Precipitation Measurement (GPM) Mission
2014-02-21
A sign at an overlook, named Rocket Hill, helps viewers identify the various facilities of the Tanegashima Space Center (TNSC), including launch pad 1 that will be used Feb. 28, 2014 for the launch of an H-IIA rocket carrying the Global Precipitation Measurement (GPM) Core Observatory, Friday, Feb. 21, 2014, Tanegashima Island, Japan. The NASA-Japan Aerospace Exploration Agency (JAXA) GPM spacecraft will collect information that unifies data from an international network of existing and future satellites to map global rainfall and snowfall every three hours. Photo Credit: (NASA/Bill Ingalls)
Lightness modification of color image for protanopia and deuteranopia
NASA Astrophysics Data System (ADS)
Tanaka, Go; Suetake, Noriaki; Uchino, Eiji
2010-01-01
In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method is proposed that enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in the input image. Solving this problem then yields a color image that is perceptible and comprehensible both to protanopes or deuteranopes and to viewers with no color vision deficiency. Through experiments, the effectiveness of the proposed method is illustrated.
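To make the general idea concrete, here is a small sketch, under stated assumptions, of lightness-based recoloring: it is not the authors' formulation, and the dichromat simulation (zeroing the a* channel), the toy colors, and the penalty weight are all placeholders chosen only for illustration.

```python
"""Illustrative sketch of lightness-based recoloring for dichromacy.

NOT the authors' method: it only mimics the general idea of adjusting
lightness (L*) so that pairwise color differences a dichromat would lose
(here crudely modeled by zeroing the a* channel) are restored, while
keeping changes small for viewers with normal color vision.
"""
import numpy as np
from scipy.optimize import minimize

# A few colors in CIELAB (L*, a*, b*) -- toy data.
lab = np.array([[60.0,  55.0,  20.0],   # reddish
                [60.0, -50.0,  25.0],   # greenish
                [40.0,   5.0, -40.0]])  # bluish

def pairwise_de(colors):
    """Euclidean (CIE76-style) pairwise color differences."""
    diff = colors[:, None, :] - colors[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

target = pairwise_de(lab)               # differences seen with normal vision

def dichromat(colors):
    """Crude stand-in for protan/deutan perception: drop the a* axis."""
    sim = colors.copy()
    sim[:, 1] = 0.0
    return sim

def objective(dL):
    modified = lab.copy()
    modified[:, 0] += dL                # only lightness is changed
    err = pairwise_de(dichromat(modified)) - target
    return (err ** 2).sum() + 0.01 * (dL ** 2).sum()   # stay close to original

res = minimize(objective, x0=np.zeros(len(lab)))
print("lightness offsets:", res.x.round(1))
```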
Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus F X
2007-08-30
Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice.
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2009-02-01
Vision III Imaging, Inc. (the Company) has developed Parallax Image Display (PID™) software tools to critically align and display aerial images with parallax differences. Terrain features are rendered obvious to the viewer when critically aligned images are presented alternately at 4.3 Hz. The recent inclusion of digital elevation models in geographic data browsers now allows true three-dimensional parallax to be acquired from virtual globe programs like Google Earth. The authors have successfully developed PID methods and code that allow three-dimensional geographical terrain data to be visualized using temporal parallax differences.
Modeling Color Difference for Visualization Design.
Szafir, Danielle Albers
2018-01-01
Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that people's abilities to perceive color differences vary significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.
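The sketch below illustrates the kind of mark-dependent discriminability model the abstract motivates; the scaling factors, threshold, and slope are invented placeholders, not the published fits, and the CIE76 formula is used only because it is simple.

```python
# Illustrative only: the paper fits probabilistic models to crowdsourced data;
# the constants below are made-up placeholders, not the published results.
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical per-mark scaling: smaller/thinner marks need a larger raw
# difference to be noticed, so their effective difference is discounted.
MARK_SCALE = {"point": 0.7, "bar": 1.0, "line": 0.6}

def p_noticeable(lab1, lab2, mark, threshold=10.0, slope=0.4):
    """Toy logistic model of the chance a viewer notices the difference."""
    effective = MARK_SCALE[mark] * delta_e76(lab1, lab2)
    return 1.0 / (1.0 + math.exp(-slope * (effective - threshold)))

print(p_noticeable((60, 20, 30), (60, 35, 30), "line"))  # thin marks: harder
print(p_noticeable((60, 20, 30), (60, 35, 30), "bar"))   # large marks: easier
```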
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video formats (transmitting pixel information), such as video conferencing or MPEG movies, over the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool currently used is FAST (Flow Analysis Software Toolkit).
Social Water Science Data: Dimensions, Data Management, and Visualization
NASA Astrophysics Data System (ADS)
Jones, A. S.; Horsburgh, J. S.; Flint, C.; Jackson-Smith, D.
2016-12-01
Water systems are increasingly conceptualized as coupled human-natural systems, with growing emphasis on representing the human element in hydrology. However, social science data and associated considerations may be unfamiliar and intimidating to many hydrologic researchers. Monitoring social aspects of water systems involves expanding the range of data types typically used in hydrology and appreciating nuances in datasets that are well known to social scientists, but less understood by hydrologists. We define social water science data as any information representing the human aspects of a water system. We present a scheme for classifying these data, highlight an array of data types, and illustrate data management considerations and challenges unique to social science data. This classification scheme was applied to datasets generated as part of iUTAH (innovative Urban Transitions and Arid region Hydro-sustainability), an interdisciplinary water research project based in Utah, USA that seeks to integrate and share social and biophysical water science data. As the project deployed cyberinfrastructure for baseline biophysical data, cyberinfrastructure for analogous social science data was necessary. As a particular case of social water science data, we focus in this presentation on social science survey data. These data are often interpreted through the lens of the original researcher and are typically presented to interested parties in static figures or reports. To provide more exploratory and dynamic communication of these data beyond the individual or team who collected the data, we developed a web-based, interactive viewer to visualize social science survey responses. This interface is applicable for examining survey results that show human motivations and actions related to environmental systems and as a useful tool for participatory decision-making. It also serves as an example of how new data sharing and visualization tools can be developed once the classification and characteristics of social water science data are well understood. We demonstrate the survey data viewer implemented to explore water-related survey data collected as part of the iUTAH project. The Viewer uses a standardized template for encoding survey data and metadata, making it generalizable and reusable for similar surveys.
Visual Display of 5p-arm and 3p-arm miRNA Expression with a Mobile Application.
Pan, Chao-Yu; Kuo, Wei-Ting; Chiu, Chien-Yuan; Lin, Wen-Chang
2017-01-01
MicroRNAs (miRNAs) play important roles in human cancers. In previous studies, we demonstrated that both the 5p-arm and the 3p-arm of mature miRNAs can be expressed from the same precursor, and we further interrogated 5p-arm and 3p-arm miRNA expression with a comprehensive arm feature annotation list. To help biologists visualize differential 5p-arm and 3p-arm miRNA expression patterns, we utilized a user-friendly mobile App to display The Cancer Genome Atlas (TCGA) miRNA-Seq expression information. We collected over 4,500 miRNA-Seq datasets from 15 TCGA cancer types and processed them with the 5p-arm and 3p-arm annotation analysis pipeline. To be displayed with the RNA-Seq Viewer App, annotated 5p-arm and 3p-arm miRNA expression information and miRNA gene loci information were converted into SQLite tables. In this application, for any given miRNA gene, 5p-arm miRNA expression is illustrated on top of the chromosome ideogram and 3p-arm miRNA expression is illustrated at the bottom of the chromosome ideogram. Users can then easily interrogate differentially expressed 5p-arm/3p-arm miRNAs with their mobile devices. This study demonstrates the feasibility and utility of the RNA-Seq Viewer App beyond mRNA-Seq data visualization.
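As a rough sketch of how arm-level expression might be stored and queried in SQLite, the snippet below uses Python's standard sqlite3 module; the table name, columns, and values are assumptions for illustration, not the App's actual schema.

```python
# Sketch of the kind of SQLite table such a viewer might query; the schema,
# column names, and values are assumptions, not the App's actual design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mirna_arm_expression (
        mirna_gene TEXT,   -- e.g. 'hsa-mir-21'
        arm        TEXT,   -- '5p' or '3p'
        cancer     TEXT,   -- TCGA cancer type code
        rpm        REAL    -- reads per million from miRNA-Seq
    )
""")
conn.executemany(
    "INSERT INTO mirna_arm_expression VALUES (?, ?, ?, ?)",
    [("hsa-mir-21", "5p", "BRCA", 120000.0),
     ("hsa-mir-21", "3p", "BRCA", 850.0)],
)

# For a given gene, fetch both arms so a viewer could draw 5p above and
# 3p below the chromosome ideogram.
for row in conn.execute(
        "SELECT arm, cancer, rpm FROM mirna_arm_expression "
        "WHERE mirna_gene = ? ORDER BY arm DESC", ("hsa-mir-21",)):
    print(row)
```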
The predictive mind and the experience of visual art work
Kesner, Ladislav
2014-01-01
Among the main challenges of the predictive brain/mind concept is how to link prediction at the neural level to prediction at the cognitive-psychological level and finding conceptually robust and empirically verifiable ways to harness this theoretical framework toward explaining higher-order mental and cognitive phenomena, including the subjective experience of aesthetic and symbolic forms. Building on the tentative prediction error account of visual art, this article extends the application of the predictive coding framework to the visual arts. It does so by linking this theoretical discussion to a subjective, phenomenological account of how a work of art is experienced. In order to engage more deeply with a work of art, viewers must be able to tune or adapt their prediction mechanism to recognize art as a specific class of objects whose ontological nature defies predictability, and they must be able to sustain a productive flow of predictions from low-level sensory, recognitional to abstract semantic, conceptual, and affective inferences. The affective component of the process of predictive error optimization that occurs when a viewer enters into dialog with a painting is constituted both by activating the affective affordances within the image and by the affective consequences of prediction error minimization itself. The predictive coding framework also has implications for the problem of the culturality of vision. A person’s mindset, which determines what top–down expectations and predictions are generated, is co-constituted by culture-relative skills and knowledge, which form hyperpriors that operate in the perception of art. PMID:25566111
Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.
Hoffman, David M; Girshick, Ahna R; Akeley, Kurt; Banks, Martin S
2008-03-28
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues (accommodation and blur in the retinal image) specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.
Remotely Sensed Imagery from USGS: Update on Products and Portals
NASA Astrophysics Data System (ADS)
Lamb, R.; Lemig, K.
2016-12-01
The USGS Earth Resources Observation and Science (EROS) Center has recently implemented a number of additions and changes to its existing suite of products and user access systems. Together, these changes will enhance the accessibility, breadth, and usability of the remotely sensed image products and delivery mechanisms available from USGS. As of late 2016, several new image products are now available for public download at no charge from USGS/EROS Center. These new products include: (1) global Level 1T (precision terrain-corrected) products from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), provided via NASA's Land Processes Distributed Active Archive Center (LP DAAC); and (2) Sentinel-2 Multispectral Instrument (MSI) products, available through a collaborative effort with the European Space Agency (ESA). Other new products are also planned to become available soon. In an effort to enable future scientific analysis of the full 40+ year Landsat archive, the USGS also introduced a new "Collection Management" strategy for all Landsat Level 1 products. This new archive and access schema involves quality-based tier designations that will support future time series analysis of the historic Landsat archive at the pixel level. Along with the quality tier designations, the USGS has also implemented a number of other Level 1 product improvements to support Landsat science applications, including: enhanced metadata, improved geometric processing, refined quality assessment information, and angle coefficient files. The full USGS Landsat archive is now being reprocessed in accordance with the new "Collection 1" specifications. Several USGS data access and visualization systems have also seen major upgrades. These user interfaces include a new version of the USGS LandsatLook Viewer, which was released in Fall 2017 to provide enhanced functionality and Sentinel-2 visualization and access support. A beta release of the USGS Global Visualization Tool ("GloVis Next") was also released in Fall 2017, with many new features including data visualization at full resolution. The USGS also introduced a time-enabled web mapping service (WMS) to support time-based access to the existing LandsatLook "natural color" full-resolution browse image services.
NASA Astrophysics Data System (ADS)
Kelley, Owen A.
2013-02-01
THOR, the Tool for High-resolution Observation Review, is a data viewer for the Tropical Rainfall Measuring Mission (TRMM) and the upcoming Global Precipitation Measurement (GPM) mission. THOR began as a desktop application, but now it can be accessed with a web browser, making THOR one of the first online tools for visualizing TRMM satellite data (http://pps.gsfc.nasa.gov/thor). In this effort, the reuse of the existing visualization code was maximized and the complexity of new code was minimized by avoiding unnecessary functionality, frameworks, or libraries. The simplicity of this approach makes it potentially attractive to researchers wishing to adapt their visualization applications for online deployment. To enable THOR to run within a web browser, three new pieces of code are written. First, the graphical user interface (GUI) of the desktop application is translated into HTML, JavaScript, and CSS. Second, a simple communication mechanism is developed over HTTP. Third, a virtual GUI is created on the server that interfaces with the image-generating routines of the existing desktop application so that these routines do not need to be modified for online use. While the basic functionality of THOR is now available online, prototyping is ongoing for enhanced 3D imaging and other aspects of both THOR Desktop and THOR Online. Because TRMM data products are complex and periodically reprocessed with improved algorithms, having a tool such as THOR is important to analysts at the Precipitation Processing System where the algorithms are tested and the products generated, stored, and distributed. Researchers also have found THOR useful for taking a first look at individual files before writing their own software to perform specialized calculations and analyses.
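The following is a minimal sketch of the general pattern described for THOR Online (a browser GUI sends parameters over HTTP, and server-side code hands them to an existing image-generating routine and returns an image), built with Python's standard http.server; the endpoint, parameter names, and render_image() stand-in are assumptions for illustration, not THOR's actual interface.

```python
# Sketch of the browser-to-server pattern only; THOR's real protocol and
# image routines are not reproduced here, so the endpoint, parameters, and
# render_image() placeholder are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def render_image(orbit: str, product: str) -> bytes:
    """Placeholder for an existing desktop image-generating routine."""
    return f"image for orbit={orbit} product={product}".encode()

class ViewerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        orbit = query.get("orbit", ["unknown"])[0]
        product = query.get("product", ["unknown"])[0]
        body = render_image(orbit, product)     # "virtual GUI" -> legacy code
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ViewerHandler).serve_forever()
```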
NASA Astrophysics Data System (ADS)
Ansari, S.; Del Greco, S.
2006-12-01
In February 2005, 61 countries around the world agreed on a 10 year plan to work towards building open systems for sharing geospatial data and services across different platforms worldwide. This system is known as the Global Earth Observation System of Systems (GEOSS). The objective of GEOSS focuses on easy access to environmental data and interoperability across different systems, allowing participating countries to measure the "pulse" of the planet in an effort to advance society. In support of GEOSS goals, NOAA's National Climatic Data Center (NCDC) has developed radar visualization and data exporter tools in an open systems environment. The NCDC Weather Radar Toolkit (WRT) loads Weather Surveillance Radar 1988 Doppler (WSR-88D) volume scan (S-band) data, known as Level-II, and derived products, known as Level-III, into an Open Geospatial Consortium (OGC) compliant environment. The application is written entirely in Java and will run on any Java-supported platform including Windows, Macintosh and Linux/Unix. The application is launched via Java Web Start and runs on the client machine while accessing these data locally or remotely from the NCDC archive, NOAA FTP server or any URL or THREDDS Data Server. The WRT allows the data to be manipulated to create custom mosaics, composites and precipitation estimates. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. By decoding the various radar formats into the NetCDF Common Data Model, the exported NetCDF data becomes interoperable with existing software packages including THREDDS Data Server and the Integrated Data Viewer (IDV). The NCDC recently partnered with NOAA's National Severe Storms Lab (NSSL) to decode Sigmet C-band Doppler radar data, giving the NCDC Viewer/Data Exporter the functionality to read C-band data. This also supports a bilateral agreement between the United States and Canada for data sharing and interoperability between the US WSR-88D and Environment Canada radar networks. In addition, the NCDC partnered with the University of Oklahoma to develop decoders to read a test bed of distributed X-band radars that are funded through the Collaborative Adaptive Sensing of the Atmosphere (CASA) project. The NCDC is also archiving the National Mosaic and Next Generation QPE (Q2) products from NSSL, which provide products such as three-dimensional reflectivity, composite reflectivity and precipitation estimates at a 1 km resolution. These three sources of radar data are also supported in the WRT.
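One practical consequence of exporting to the NetCDF Common Data Model is that generic tools can read the result. The sketch below, assuming the third-party netCDF4 Python package, inspects such an exported file; the filename and the commented variable name are placeholders, since actual contents depend on the radar product that was converted.

```python
# Generic sketch: inspecting a WRT-exported NetCDF file with the netCDF4
# package. The filename and variable names are placeholders; actual exports
# depend on the radar product that was converted.
from netCDF4 import Dataset

with Dataset("exported_radar.nc") as nc:           # hypothetical export
    print(list(nc.dimensions))                     # e.g. azimuth, gate
    for name, var in nc.variables.items():
        print(name, var.dimensions, getattr(var, "units", ""))
    # refl = nc.variables["Reflectivity"][:]       # name depends on the product
```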
Johnsen, Sönke; Widder, Edith A; Mobley, Curtis D
2004-08-01
Many deep-sea species, particularly crustaceans, cephalopods, and fish, use photophores to illuminate their ventral surfaces and thus disguise their silhouettes from predators viewing them from below. This strategy has several potential limitations, two of which are examined here. First, a predator with acute vision may be able to detect the individual photophores on the ventral surface. Second, a predator may be able to detect any mismatch between the spectrum of the bioluminescence and that of the background light. The first limitation was examined by modeling the perceived images of the counterillumination of the squid Abralia veranyi and the myctophid fish Ceratoscopelus maderensis as a function of the distance and visual acuity of the viewer. The second limitation was addressed by measuring downwelling irradiance under moonlight and starlight and then modeling underwater spectra. Four water types were examined: coastal water at a depth of 5 m and oceanic water at 5, 210, and 800 m. The appearance of the counterillumination was more affected by the visual acuity of the viewer than by the clarity of the water, even at relatively large distances. Species with high visual acuity (0.11 degrees resolution) were able to distinguish the individual photophores of some counterilluminating signals at distances of several meters, thus breaking the camouflage. Depth and the presence or absence of moonlight strongly affected the spectrum of the background light, particularly near the surface. The increased variability near the surface was partially offset by the higher contrast attenuation at shallow depths, which reduced the sighting distance of mismatches. This research has implications for the study of spatial resolution, contrast sensitivity, and color discrimination in deep-sea visual systems.
An online analytical processing multi-dimensional data warehouse for malaria data
Madey, Gregory R; Vyushkov, Alexander; Raybaud, Benoit; Burkot, Thomas R; Collins, Frank H
2017-01-01
Abstract Malaria is a vector-borne disease that contributes substantially to the global burden of morbidity and mortality. The management of malaria-related data from heterogeneous, autonomous, and distributed data sources poses unique challenges and requirements. Although online data storage systems exist that address specific malaria-related issues, a globally integrated online resource to address different aspects of the disease does not exist. In this article, we describe the design, implementation, and applications of a multi-dimensional, online analytical processing data warehouse, named the VecNet Data Warehouse (VecNet-DW). It is the first online, globally-integrated platform that provides efficient search, retrieval and visualization of historical, predictive, and static malaria-related data, organized in data marts. Historical and static data are modelled using star schemas, while predictive data are modelled using a snowflake schema. The major goals, characteristics, and components of the DW are described along with its data taxonomy and ontology, the external data storage systems and the logical modelling and physical design phases. Results are presented as screenshots of a Dimensional Data browser, a Lookup Tables browser, and a Results Viewer interface. The power of the DW emerges from integrated querying of the different data marts and structuring those queries to the desired dimensions, enabling users to search, view, analyse, and store large volumes of aggregated data, and responding better to the increasing demands of users. Database URL https://dw.vecnet.org/datawarehouse/ PMID:29220463
ERIC Educational Resources Information Center
Mayer, Vicki
2003-01-01
Examines the local reception of global Spanish-language soap operas, or telenovelas. Explores how young people talked about Mexican telenovelas in daily life. Concludes that the telenovela, within certain limits, reflected some of the national, ethnic, gender, and class tensions that defined the viewers' identities as working-class, Mexican…
A midsummer-night's shock wave
NASA Astrophysics Data System (ADS)
Hargather, Michael; Liebner, Thomas; Settles, Gary
2007-11-01
The aerial pyrotechnic shells used in professional display fireworks explode a bursting charge at altitude in order to disperse the "stars" of the display. The shock wave from the bursting charge is heard on the ground as a loud report, though it has by then typically decayed to a mere sound wave. However, viewers seated near the standard safety borders can still be subjected to weak shock waves. These have been visualized using a large, portable, retro-reflective "Edgerton" shadowgraph technique and a high-speed digital video camera. Images recorded at 10,000 frames per second show essentially-planar shock waves from 10- and 15-cm firework shells impinging on viewers during the 2007 Central Pennsylvania July 4th Festival. The shock speed is not measurably above Mach 1, but we nonetheless conclude that, if one can sense a shock-like overpressure, then the wave motion is strong enough to be observed by density-sensitive optics.
Paek, Hye-Jin; Hove, Thomas; Jeon, Jehoon
2013-01-01
To explore the feasibility of social media for message testing, this study connects favorable viewer responses to antismoking videos on YouTube with the videos' message characteristics (message sensation value [MSV] and appeals), producer types, and viewer influences (viewer rating and number of viewers). Through multilevel modeling, a content analysis of 7,561 viewer comments on antismoking videos is linked with a content analysis of 87 antismoking videos. Based on a cognitive response approach, viewer comments are classified and coded as message-oriented thought, video feature-relevant thought, and audience-generated thought. The three mixed logit models indicate that videos with a greater number of viewers consistently increased the odds of favorable viewer responses, while those presenting humor appeals decreased the odds of favorable message-oriented and audience-generated thoughts. Some significant interaction effects show that videos produced by laypeople may hinder favorable viewer responses, while a greater number of viewer comments can work jointly with videos presenting threat appeals to predict favorable viewer responses. Also, for a more accurate understanding of audience responses to the messages, nuance cues should be considered together with message features and viewer influences.
Strategic Assessment for Arctic Observing, and the New Arctic Observing Viewer
NASA Astrophysics Data System (ADS)
Kassin, A.; Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Dover, M.; Score, R.; Lin, D. H.; Villarreal, S.; Quezada, A.; Tweedie, C. E.
2013-12-01
Although a great deal of progress has been made with various Arctic Observing efforts, it can be difficult to assess that progress. What data collection efforts are established or under way? Where? By whom? To help meet the strategic needs of SEARCH-AON, SAON, and related initiatives, a new resource has been released: the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org). This web mapping application covers the 'who', 'what', 'where', and 'when' of data collection sites - wherever marine or terrestrial data are collected. Hundreds of sites are displayed, providing an overview as well as details. Users can visualize, navigate, select, search, draw, print, and more. This application currently showcases a subset of observational activities and will become more comprehensive with time. The AOV is founded on principles of interoperability, with an emerging metadata standard and compatible web service formats, such that participating agencies and organizations can use the AOV tools and services for their own purposes. In this way, the AOV will complement other cyber-resources, and will help science planners, funding agencies, PI's, and others to: assess status, identify overlap, fill gaps, assure sampling design, refine network performance, clarify directions, access data, coordinate logistics, collaborate, and more to meet Arctic Observing goals.
Introducing a Virtual Reality Experience in Anatomic Pathology Education.
Madrigal, Emilio; Prajapati, Shyam; Hernandez-Prera, Juan C
2016-10-01
A proper examination of surgical specimens is fundamental in anatomic pathology (AP) education. However, the resources available to residents may not always be suitable for efficient skill acquisition. We propose a method to enhance AP education by introducing high-definition videos featuring methods for appropriate specimen handling, viewable on two-dimensional (2D) and stereoscopic three-dimensional (3D) platforms. A stereo camera system recorded the gross processing of commonly encountered specimens. Three edited videos, with instructional audio voiceovers, were experienced by nine junior residents in a crossover study to assess the effects of the exposure (2D vs 3D movie views) on self-reported physiologic symptoms. A questionnaire was used to analyze viewer acceptance. All surveyed residents found the videos beneficial in preparation to examine a new specimen type. Viewer data suggest an improvement in specimen handling confidence and knowledge and enthusiasm toward 3D technology. None of the participants encountered significant motion sickness. Our novel method provides the foundation to create a robust teaching library. AP is inherently a visual discipline, and by building on the strengths of traditional teaching methods, our dynamic approach allows viewers to appreciate the procedural actions involved in specimen processing. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
An Empirical Study on Using Visual Embellishments in Visualization.
Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min
2012-12-01
In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.
Development of a Smart Mobile Data Module for Fetal Monitoring in E-Healthcare.
Houzé de l'Aulnoit, Agathe; Boudet, Samuel; Génin, Michaël; Gautier, Pierre-François; Schiro, Jessica; Houzé de l'Aulnoit, Denis; Beuscart, Régis
2018-03-23
The fetal heart rate (FHR) is a marker of fetal well-being in utero (when monitoring maternal and/or fetal pathologies) and during labor. Here, we developed a smart mobile data module for the remote acquisition and transmission (via a Wi-Fi or 4G connection) of FHR recordings, together with a web-based viewer for displaying the FHR datasets on a computer, smartphone or tablet. In order to define the features required by users, we modelled the fetal monitoring procedure (in home and hospital settings) via semi-structured interviews with midwives and obstetricians. Using this information, we developed a mobile data transfer module based on a Raspberry Pi. When connected to a standalone fetal monitor, the module acquires the FHR signal and sends it (via a Wi-Fi or a 3G/4G mobile internet connection) to a secure server within our hospital information system. The archived, digitized signal data are linked to the patient's electronic medical records. An HTML5/JavaScript web viewer converts the digitized FHR data into easily readable and interpretable graphs for viewing on a computer (running Windows, Linux or MacOS) or a mobile device (running Android, iOS or Windows Phone OS). The data can be viewed in real time or offline. The application includes tools required for correct interpretation of the data (signal loss calculation, scale adjustment, and precise measurements of the signal's characteristics). We performed a proof-of-concept case study of the transmission, reception and visualization of FHR data for a pregnant woman at 30 weeks of amenorrhea. She was hospitalized in the pregnancy assessment unit and FHR data were acquired three times a day with a Philips Avalon® FM30 fetal monitor. The prototype (Raspberry Pi) was connected to the fetal monitor's RS232 port. Transmission and reception of prerecorded signals were tested: the web server correctly received the signals, and the FHR recording was visualized in real time on a computer, a tablet and smartphones (running Android and iOS) via the web viewer. This process did not perturb the hospital's computer network. There was no data delay or loss during a 60-min test. The web viewer was tested successfully in the various usage situations. The system was as user-friendly as expected, and enabled rapid, secure archiving. We have developed a system for the acquisition, transmission, recording and visualization of FHR data. Healthcare professionals can view the FHR data remotely on their computer, tablet or smartphone. Integration of FHR data into a hospital information system enables optimal, secure, long-term data archiving.
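For orientation only, here is a sketch of the kind of acquisition-and-forwarding loop such a Raspberry Pi module performs, using the pyserial and requests packages; the serial port name, server URL, framing, and payload format are all assumptions, not the authors' implementation.

```python
# Sketch of an RS232-read-and-upload loop only; the real module's framing,
# endpoint, and authentication are not described here, so the port name,
# URL, and payload format below are assumptions.
import time
import serial          # pyserial
import requests

PORT = "/dev/ttyUSB0"                                 # RS232 adapter on the Pi
SERVER = "https://example-hospital.org/fhr/upload"    # hypothetical endpoint

with serial.Serial(PORT, baudrate=9600, timeout=1) as monitor:
    while True:
        chunk = monitor.read(256)          # raw bytes from the fetal monitor
        if chunk:
            requests.post(SERVER,
                          files={"fhr_chunk": chunk},
                          data={"timestamp": time.time()},
                          timeout=5)
        time.sleep(0.25)                   # pace uploads; tune as needed
```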
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
Data Acquisition and Preparation for Social Network Analysis Based on Email: Lessons Learned
2009-06-01
Search-result snippets from this report reference Pajek, a program for the analysis and visualization of large networks developed by Vladimir Batagelj and Andrej Mrvar of the University of Ljubljana in Slovenia, and the accompanying text: de Nooy, W., Mrvar, A., and Batagelj, V. (2005), Exploratory Social Network Analysis with Pajek (Structural Analysis in the Social Sciences series), Cambridge University Press.
Web-based metabolic network visualization with a zooming user interface
2011-01-01
Background Displaying complex metabolic-map diagrams in Web browsers, and allowing users to interact with them for querying and for overlaying expression data, is challenging. Description We present a Web-based metabolic-map diagram, called the Cellular Overview, which can be interactively explored by the user. The main characteristic of this application is its zooming user interface, which enables the user to focus on appropriate granularities of the network at will. Various search commands are available to visually highlight sets of reactions, pathways, enzymes, metabolites, and so on. Expression data from single or multiple experiments can be overlaid on the diagram, which we call the Omics Viewer capability. The application provides Web services to highlight the diagram and to invoke the Omics Viewer. The application is written entirely in JavaScript for the client browsers and connects to a Pathway Tools Web server to retrieve data and diagrams. It uses the OpenLayers library to display tiled diagrams. Conclusions This new online tool is capable of displaying large and complex metabolic-map diagrams in a very interactive manner. The application is available as part of the Pathway Tools software that powers multiple metabolic databases, including Biocyc.org; the Cellular Overview is accessible under the Tools menu. PMID:21595965
Introducing the ‘Science Myths Revealed’ Misconception Video Series
NASA Astrophysics Data System (ADS)
Eisenhamer, Bonnie; Villard, R.; Estacion, M.; Hassan, J.; Ryer, H.
2012-05-01
A misconception is a preconceived and inaccurate view of how the world works. There are many common science misconceptions held by students and the public alike about various topics in astronomy, including but not limited to galaxies, black holes, light and color, and the solar system. It is critical to identify and address misconceptions because they can stand in the way of new learning and impede one's ability to apply science principles meaningfully to everyday life. In response, the News and Education teams at the Space Telescope Science Institute worked in collaboration with a consultant to develop the "Science Myths Revealed" misconception video series. The purpose of this video series is to present common astronomy misconceptions in a brief and visually engaging manner while also presenting and reinforcing the truth of the universe and celestial phenomena within it. Viewers can watch the videos to get more information about specific astronomy misconceptions as well as the facts to dispel them. Visual cues and demonstrations provide viewers with a more concrete representation of what are often abstract and misunderstood concepts, making the videos ideal as both engagement and instructional tools. Three videos in the series have been produced and are currently being field-tested within the education community.
Visualizing NetCDF Files by Using the EverVIEW Data Viewer
Conzelmann, Craig; Romañach, Stephanie S.
2010-01-01
Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium compliant, Web-mapping service client and charting interface allows the user to view Web-available spatial data as additional map overlays and provides simple charting visualizations of NetCDF grid values.
Microfilm Viewer Experiments. Final Report.
ERIC Educational Resources Information Center
Reintjes, J. F.; And Others
Two new designs for microfilm viewers are described. Both viewers are front projection viewers utilizing matte surface display screens. One viewer with an adjustable horizontal screen has a normal magnification rate and is mounted on a desk top. The other viewer has a high (4x) magnification rate in a mini-theater configuration with remote…
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
Jiménez-Muñoz, Juan C.; Mattar, Cristian; Sobrino, José A.; Malhi, Yadvinder
2015-01-01
Advances in information technologies and accessibility to climate and satellite data in recent years have favored the development of web-based tools with user-friendly interfaces in order to facilitate the dissemination of geo/biophysical products. These products are useful for the analysis of the impact of global warming over different biomes. In particular, the study of the Amazon forest responses to drought have recently received attention by the scientific community due to the occurrence of two extreme droughts and sustained warming over the last decade. Thermal Amazoni@ is a web-based platform for the visualization and download of surface thermal anomalies products over the Amazon forest and adjacent intertropical oceans using Google Earth as a baseline graphical interface (http://ipl.uv.es/thamazon/web). This platform is currently operational at the servers of the University of Valencia (Spain), and it includes both satellite (MODIS) and climatic (ERA-Interim) datasets. Thermal Amazoni@ is composed of the viewer system and the web and ftp sites with ancillary information and access to product download. PMID:26029379
Effective and Accurate Colormap Selection
NASA Astrophysics Data System (ADS)
Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.
2016-12-01
Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large body of literature in color science on how color is perceived, which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
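As a hedged illustration of the one-field-one-colormap convention described above, the sketch below uses matplotlib's built-in perceptually uniform colormaps; the field names, toy data and specific colormap pairings are invented for the example (the authors' own domain-specific colormaps would slot in the same way).

```python
# Sketch: keep a fixed field -> colormap mapping across all figures so readers
# can reuse their trained interpretation of color. Toy data, illustrative pairings.
import numpy as np
import matplotlib.pyplot as plt

field_cmaps = {"temperature": "plasma", "salinity": "viridis", "chlorophyll": "cividis"}

x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
fields = {name: np.sin(6 * x * (i + 1)) * np.cos(4 * y) for i, name in enumerate(field_cmaps)}

fig, axes = plt.subplots(1, len(fields), figsize=(12, 3))
for ax, (name, data) in zip(axes, fields.items()):
    im = ax.pcolormesh(x, y, data, cmap=field_cmaps[name])  # same field, same colormap, every plot
    ax.set_title(name)
    fig.colorbar(im, ax=ax)
plt.show()
```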
Seeing voices of health disparity: evaluating arts projects as influence processes.
Parsons, Janet; Heus, Lineke; Moravac, Catherine
2013-02-01
Arts-informed approaches are increasingly popular as vehicles for research, knowledge translation and for engaging key stakeholders on topics of health and health care. This paper describes an evaluation of a multimedia art installation intended to promote awareness of health disparities as experienced by homeless persons living in Toronto (Canada). The objective of the evaluation was to determine whether the installation had an impact on audience members, and if so, to understand its influence on viewers' perspectives on homelessness and the health concerns of homeless persons. Key themes were identified through the analysis of direct observational data of viewer interactions with the exhibit and qualitative interviews with different audience members after the exhibit. The four key themes were: (1) Promoting recognition of common humanity between viewers and viewed (challenging previously held assumptions and stereotypes, narrowing perceived social distance); (2) functions fulfilled (or potentially fulfilled) by the exhibit: raising awareness, educational applications, and potential pathways by which the exhibit could serve as a call to social action; (3) stories that prompt more stories: the stories within the exhibit (coupled with the interview questions) prompted further sharing of stories amongst the evaluation respondents, highlighting the iterative nature of such approaches. Respondents told of recognizing similarities in the experiences recounted in the exhibit with their own interactions with homeless persons; (4) strengths and weaknesses identified: including aesthetic features, issues of audience 'reach' and the importance of suitable venues for exhibition. Theoretically informed by narrative analysis and visual anthropology, this evaluation demonstrates that arts-informed 'interventions' are highly complex and work in subtle ways on viewers, allowing them to re-imagine the lives of others and identify points of common interest. It also problematizes our assumptions about which outcomes matter and why. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.
2007-12-01
The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers or data types have been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as Model, Satellite and Radar. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, NOAA FTP server or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. Expanding use and availability can best be accomplished through standards which promote interoperability. Our GIS-related products provide Open Geospatial Consortium (OGC) compliant Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS) and Federal Geographic Data Committee (FGDC) metadata as a complement to the map viewers. KML/KMZ data files (soon to be compliant OGC specifications) also provide access.
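A hedged sketch of the kind of server-side subsetting described above: opening a gridded dataset over OPeNDAP (as served by a THREDDS Data Server) and slicing a region and time window before download. The URL, variable name and coordinate names are placeholders, not actual NCDC endpoints.

```python
# Sketch: subset a remote gridded dataset before downloading it as NetCDF.
# Endpoint and names are hypothetical; coordinate names/ordering depend on the dataset.
import xarray as xr

url = "https://example.gov/thredds/dodsC/model/analysis.nc"   # placeholder THREDDS/OPeNDAP endpoint
ds = xr.open_dataset(url)                                     # lazy: nothing is downloaded yet
subset = ds["air_temperature"].sel(
    lat=slice(25, 35), lon=slice(-100, -85),                  # geographic window
    time=slice("2007-06-01", "2007-06-30"),                   # time window
)
subset.to_netcdf("subset.nc")                                 # local NetCDF, readable by GIS tools
```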
EvolView, an online tool for visualizing, annotating and managing phylogenetic trees.
Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J; Hu, Songnian; Chen, Wei-Hua
2012-07-01
EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.
NASA Astrophysics Data System (ADS)
Lammers, M.
2016-12-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
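To illustrate the "HDF5 values converted to JSON on the fly" step mentioned for STORM VG, here is a minimal h5py sketch. The file name and dataset paths are placeholders; real GPM products have product-specific HDF5 layouts.

```python
# Sketch: pull swath values out of an HDF5 granule and emit a JSON payload a
# web client could render. File name and dataset paths are hypothetical.
import json
import h5py

with h5py.File("gpm_granule.HDF5", "r") as f:
    lat = f["S1/Latitude"][:]                       # placeholder dataset paths
    lon = f["S1/Longitude"][:]
    rate = f["S1/precipRate"][:]

mask = rate > 0                                     # keep only precipitating points
points = [
    {"lat": float(la), "lon": float(lo), "rate": float(r)}
    for la, lo, r in zip(lat[mask], lon[mask], rate[mask])
]
with open("granule.json", "w") as out:
    json.dump(points, out)
```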
Cosmic cookery: making a stereoscopic 3D animated movie
NASA Astrophysics Data System (ADS)
Holliman, Nick; Baugh, Carlton; Frenk, Carlos; Jenkins, Adrian; Froner, Barbara; Hassaine, Djamel; Helly, John; Metcalfe, Nigel; Okamoto, Takashi
2006-02-01
This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: 1) Controlling the depth presentation, 2) Editing the stereoscopic sequences, 3) Generating compressed movies in display-specific formats. We conclude that the generation of high quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention but we believe these overheads are worthwhile when presenting inherently 3D data as the result is significantly increased impact and better understanding of complex 3D scenes.
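The "depth budget" decisions behind challenge 1) come down to simple screen-parallax geometry. The sketch below is a generic back-of-the-envelope calculation, not the authors' production pipeline; the viewing distance and interocular separation are assumed values.

```python
# Standard stereoscopy geometry: a viewer with interocular separation e at
# distance V from the screen perceives a feature drawn with on-screen parallax p
# at Z = V * e / (e - p). Positive p lies behind the screen, negative p in front.
def perceived_depth(parallax_m, viewing_distance_m=2.0, interocular_m=0.065):
    return viewing_distance_m * interocular_m / (interocular_m - parallax_m)

for p_mm in (-20, -10, 0, 10, 20, 30):
    z = perceived_depth(p_mm / 1000.0)
    print(f"parallax {p_mm:+3d} mm -> perceived distance {z:5.2f} m")
```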
Vergence–accommodation conflicts hinder visual performance and cause visual fatigue
Hoffman, David M.; Girshick, Ahna R.; Akeley, Kurt; Banks, Martin S.
2010-01-01
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues—accommodation and blur in the retinal image—specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one’s ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays. PMID:18484839
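A quick way to see the conflict described above is to express both demands in diopters (the reciprocal of distance in meters): on a conventional stereoscopic display the eyes accommodate to the screen while they converge to the simulated distance. The distances below are illustrative, not the experimental setup of the study.

```python
# Illustration of the vergence-accommodation mismatch in diopters; values invented.
def conflict_diopters(screen_distance_m, simulated_distance_m):
    accommodation = 1.0 / screen_distance_m      # focus demand, pinned to the screen
    vergence = 1.0 / simulated_distance_m        # vergence demand, follows the content
    return abs(accommodation - vergence)

screen = 0.6                                      # e.g. a desktop display at 0.6 m
for simulated in (0.3, 0.5, 0.6, 1.0, 3.0):
    print(f"object at {simulated:.1f} m -> conflict {conflict_diopters(screen, simulated):.2f} D")
```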
Design of smart home sensor visualizations for older adults.
Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George
2014-01-01
Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualization that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity on a specific day. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.
Cosmography and Data Visualization
NASA Astrophysics Data System (ADS)
Pomarède, Daniel; Courtois, Hélène M.; Hoffman, Yehuda; Tully, R. Brent
2017-05-01
Cosmography, the study and making of maps of the universe or cosmos, is a field where visual representation benefits from modern three-dimensional visualization techniques and media. At the extragalactic distance scales, visualization is contributing to our understanding of the complex structure of the local universe in terms of spatial distribution and flows of galaxies and dark matter. In this paper, we report advances in the field of extragalactic cosmography obtained using the SDvision visualization software in the context of the Cosmicflows Project. Here, multiple visualization techniques are applied to a variety of data products: catalogs of galaxy positions and galaxy peculiar velocities, reconstructed velocity field, density field, gravitational potential field, velocity shear tensor viewed in terms of its eigenvalues and eigenvectors, envelope surfaces enclosing basins of attraction. These visualizations, implemented as high-resolution images, videos, and interactive viewers, have contributed to a number of studies: the cosmography of the local part of the universe, the nature of the Great Attractor, the discovery of the boundaries of our home supercluster of galaxies Laniakea, the mapping of the cosmic web, and the study of attractors and repellers.
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles in each elemental lens in the lens array are determined by the positions of the viewers, which means the elemental image can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed the relationship and the conditions for the multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
The effects of alphabet and expertise on letter perception
Wiley, Robert W.; Wilson, Colin; Rapp, Brenda
2016-01-01
Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778
Subsurface data visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul
2017-04-01
Due to their increasing complexity and size, visualization of geological data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education to demonstrate the importance of the geology to policy makers. In the Netherlands two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km with 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual Reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way as compared to a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a new, intuitive and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis of, e.g., hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing can be done in any direction using a 'virtual knife'. Future plans are to further improve performance from a 30 up to a 90 Hz update rate to reduce possible motion sickness, and to add more advanced filtering capabilities as well as a multi-user setup, annotation capabilities and visualization of historical data.
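For a concrete picture of the unit filtering and "virtual knife" slicing described above, here is a small numpy sketch; the grid dimensions, unit codes and cutting plane are invented, and this is not the viewer's actual implementation.

```python
# Sketch of two voxel-viewer interactions: hide selected hydrogeological units
# and keep only voxels on one side of a cutting plane. All values are invented.
import numpy as np

units = np.random.randint(0, 6, size=(200, 250, 40))      # unit code per voxel (x, y, z)
visible = np.ones(units.shape, dtype=bool)

# "filter": hide units 2 and 4 (e.g. low-conductivity layers)
visible &= ~np.isin(units, [2, 4])

# "virtual knife": keep voxels satisfying n . x >= d for an arbitrary plane
x, y, z = np.indices(units.shape)
normal, offset = np.array([1.0, 0.5, 0.0]), 150.0
visible &= (normal[0] * x + normal[1] * y + normal[2] * z) >= offset

print(f"{visible.sum()} of {visible.size} voxels remain visible")
```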
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
Bringing "Scientific Expeditions" Into the Schools
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)
Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web
NASA Technical Reports Server (NTRS)
Watson, Val; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewers local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.
ERIC Educational Resources Information Center
Van Hoof, Sarah
2018-01-01
In the globalized economy, old metadiscursive regimes have been challenged by new conditions which are often considered to be more favourable to heteroglossic practices. In Flemish Belgium, the liberalization of the TV market is said to have transformed the broadcaster VRT from a public service aiming at educating viewers into a competitive…
Tablet—next generation sequence assembly visualization
Milne, Iain; Bayer, Micha; Cardle, Linda; Shaw, Paul; Stephen, Gordon; Wright, Frank; Marshall, David
2010-01-01
Summary: Tablet is a lightweight, high-performance graphical viewer for next-generation sequence assemblies and alignments. Supporting a range of input assembly formats, Tablet provides high-quality visualizations showing data in packed or stacked views, allowing instant access and navigation to any region of interest, and whole contig overviews and data summaries. Tablet is both multi-core aware and memory efficient, allowing it to handle assemblies containing millions of reads, even on a 32-bit desktop machine. Availability: Tablet is freely available for Microsoft Windows, Apple Mac OS X, Linux and Solaris. Fully bundled installers can be downloaded from http://bioinf.scri.ac.uk/tablet in 32- and 64-bit versions. Contact: tablet@scri.ac.uk PMID:19965881
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to qualify viewer responses to visual defects that appear in low bit rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion-compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit rate/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
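As a hedged illustration of how such continuous slider traces are typically reduced to a per-sequence score, the sketch below averages synthetic ratings; the 10 Hz sampling, the 0-100 scale and the random data are assumptions, not details from the study.

```python
# Sketch: collapse continuous quality ratings to a mean opinion score with a
# simple confidence interval. Synthetic data, assumed sampling rate and scale.
import numpy as np

rng = np.random.default_rng(0)
n_viewers, n_samples = 19, 300                     # e.g. 30 s of ratings at 10 Hz
ratings = rng.uniform(20, 90, size=(n_viewers, n_samples))

per_viewer_mean = ratings.mean(axis=1)             # collapse each trace over time
mos = per_viewer_mean.mean()                       # mean opinion score for the sequence
ci95 = 1.96 * per_viewer_mean.std(ddof=1) / np.sqrt(n_viewers)
print(f"MOS = {mos:.1f} +/- {ci95:.1f} (95% CI)")
```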
Women's reactions to sexually aggressive mass media depictions.
Krafka, C; Linz, D; Donnerstein, E; Penrod, S
1997-04-01
This study examines the potential harm of sexually explicit and/or violent films to women viewers. More specifically, it investigates the idea that the visual media contribute to a cultural climate that is supportive of attitudes facilitating violence against women, diminish concern for female victims (desensitization), and have a negative impact on women's views of themselves. In this study, women viewed 1 film per day for 4 consecutive days from one of these 3 categories: 1) sexually explicit but nonviolent; 2) sexually explicit, sexually violent; and 3) mildly sexually explicit, graphically violent. They then served as jurors in a simulated rape trial. The study found that exposure to both types of violent stimuli produced desensitization and ratings of the stimuli as less degrading to women. Moreover, women exposed to the mildly sexually explicit, graphically violent images were less sensitive toward the victim in the rape trial compared with the other film viewers. However, no differences were found between the film groups and the no-exposure control group with regard to women's self-perception.
Eye contact perception in the West and East: a cross-cultural study.
Uono, Shota; Hietanen, Jari K
2015-01-01
This study investigated whether eye contact perception differs in people with different cultural backgrounds. Finnish (European) and Japanese (East Asian) participants were asked to determine whether Finnish and Japanese neutral faces with various gaze directions were looking at them. Further, participants rated the face stimuli for emotion and other affect-related dimensions. The results indicated that Finnish viewers had a smaller bias toward judging slightly averted gazes as directed at them when judging Finnish rather than Japanese faces, while the bias of Japanese viewers did not differ between faces from their own and other cultural backgrounds. This may be explained by Westerners experiencing more eye contact in their daily life leading to larger visual experience of gaze perception generally, and to more accurate perception of eye contact with people from their own cultural background particularly. The results also revealed cultural differences in the perception of emotion from neutral faces that could also contribute to the bias in eye contact perception.
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
NASA Astrophysics Data System (ADS)
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization part. Here we introduce Metadyn View as a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the necessity to install additional web engines. Moreover, it includes tools for measurement of free energies and free energy differences and data/image export.
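To make the computation concrete: for ordinary (non-well-tempered) metadynamics with a single collective variable, the deposited bias is a sum of Gaussian hills and the free energy estimate is its negative. The sketch below assumes a typical HILLS column order (time, CV value, width, height); the real layout is given in the file header, and well-tempered runs need an additional rescaling.

```python
# Sketch: reconstruct a 1-D bias potential from deposited hills and take its
# negative as the free energy estimate. Column order is an assumption.
import numpy as np

data = np.loadtxt("HILLS", comments="#")           # one deposited hill per row
cv_centers, sigmas, heights = data[:, 1], data[:, 2], data[:, 3]

s = np.linspace(cv_centers.min(), cv_centers.max(), 500)
bias = np.zeros_like(s)
for c, w, h in zip(cv_centers, sigmas, heights):
    bias += h * np.exp(-((s - c) ** 2) / (2.0 * w ** 2))

free_energy = -(bias - bias.max())                 # shifted so the minimum is at zero
```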
A lighting metric for quantitative evaluation of accent lighting systems
NASA Astrophysics Data System (ADS)
Acholo, Cyril O.; Connor, Kenneth A.; Radke, Richard J.
2014-09-01
Accent lighting is critical for artwork and sculpture lighting in museums, and subject lighting for stage, film and television. The research problem of designing effective lighting in such settings has been revived recently with the rise of light-emitting-diode-based solid-state lighting. In this work, we propose an easy-to-apply quantitative measure of the scene's visual quality as perceived by human viewers. We consider a well-accent-lit scene as one which maximizes the information about the scene (in an information-theoretic sense) available to the user. We propose a metric based on the entropy of the distribution of colors, which are extracted from an image of the scene from the viewer's perspective. We demonstrate that optimizing the metric as a function of illumination configuration (i.e., position, orientation, and spectral composition) results in natural, pleasing accent lighting. We use a photorealistic simulation tool to validate the functionality of our proposed approach, showing its successful application to two- and three-dimensional scenes.
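A minimal sketch of an entropy-of-colors figure of merit in the spirit of the metric described above; the choice of a hue histogram, the bin count and the PIL-based loading are assumptions, not the authors' exact formulation.

```python
# Sketch: score a viewer-perspective image by the entropy of its hue histogram.
# Higher entropy = richer color content under the candidate lighting.
import numpy as np
from PIL import Image

def color_entropy(path, bins=64):
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float64)
    hue = hsv[..., 0].ravel()
    hist, _ = np.histogram(hue, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()                 # entropy in bits

# Compare candidate lighting configurations by rendering or photographing the
# scene under each and keeping the configuration with the largest entropy.
```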
Impact of Visual Context on Public Perceptions of Non-Human Primate Performers
Leighty, Katherine A.; Valuska, Annie J.; Grand, Alison P.; Bettinger, Tamara L.; Mellen, Jill D.; Ross, Stephen R.; Boyle, Paul; Ogden, Jacqueline J.
2015-01-01
Prior research has shown that the use of apes, specifically chimpanzees, as performers in the media negatively impacts public attitudes of their conservation status and desirability as a pet, yet it is unclear whether these findings generalize to other non-human primates (specifically non-ape species). We evaluated the impact of viewing an image of a monkey or prosimian in an anthropomorphic or naturalistic setting, either in contact with or in the absence of a human. Viewing the primate in an anthropomorphic setting while in contact with a person significantly increased their desirability as a pet, which also correlated with increased likelihood of believing the animal was not endangered. The majority of viewers felt that the primates in all tested images were “nervous.” When shown in contact with a human, viewers felt they were “sad” and “scared”, while also being less “funny.” Our findings highlight the potential broader implications of the use of non-human primate performers by the entertainment industry. PMID:25714101
What is stereoscopic vision good for?
NASA Astrophysics Data System (ADS)
Read, Jenny C. A.
2015-03-01
Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.
Quantifying how the combination of blur and disparity affects the perceived depth
NASA Astrophysics Data System (ADS)
Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick
2011-03-01
This paper studies the influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue. But it simultaneously induces the conflict between accommodation and vergence, which is often considered a main reason for visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations, and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep an unaltered apparent depth. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any blur in the background. The increase of perceived depth can be considered as a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
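The "fit to a psychometric function" step can be sketched with scipy: a cumulative Gaussian is fitted to the proportion of trials on which the blurred image was judged deeper at each disparity level, and its mean is the point of subjective equality. The disparity values and proportions below are invented, not the paper's data.

```python
# Sketch: cumulative-Gaussian psychometric fit to 2AFC proportions; the fitted
# mean is the point of subjective equality (PSE). Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

disparity = np.array([10, 20, 30, 40, 50, 60], dtype=float)      # arcmin, invented levels
p_choose_blur = np.array([0.15, 0.30, 0.45, 0.62, 0.80, 0.92])   # proportion judged deeper

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, disparity, p_choose_blur, p0=[35.0, 10.0])
print(f"PSE = {mu:.1f} arcmin, slope sigma = {sigma:.1f} arcmin")
```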
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections, while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items, however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualize both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
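A toy sketch of the combination described above: a similarity-based spatialization (here, metric MDS on a precomputed dissimilarity matrix) with the explicit links drawn on top of it. The four-node dataset and edge list are invented; this is not the authors' implementation.

```python
# Sketch: place nodes by implicit similarity, then overlay explicit relations.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

names = ["A", "B", "C", "D"]
dissimilarity = np.array([[0, 1, 4, 3],
                          [1, 0, 3, 4],
                          [4, 3, 0, 1],
                          [3, 4, 1, 0]], dtype=float)
explicit_edges = [(0, 1), (2, 3), (1, 2)]          # e.g. documented "influenced" links

pos = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)

plt.scatter(pos[:, 0], pos[:, 1])
for i, j in explicit_edges:
    plt.plot(pos[[i, j], 0], pos[[i, j], 1], "k-")  # explicit relation drawn over the layout
for i, name in enumerate(names):
    plt.annotate(name, (pos[i, 0], pos[i, 1]))
plt.show()
```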
NASA Astrophysics Data System (ADS)
Achtor, T. H.; Rink, T.
2010-12-01
The University of Wisconsin’s Space Science and Engineering Center (SSEC) has been at the forefront in developing data analysis and visualization tools for environmental satellites and other geophysical data. The fifth generation of the Man-computer Interactive Data Access System (McIDAS-V) is Java-based, open-source, freely available software that operates on Linux, Macintosh and Windows systems. The software tools provide powerful new data manipulation and visualization capabilities that work with geophysical data in research, operational and educational environments. McIDAS-V provides unique capabilities to support innovative techniques for evaluating research results, teaching and training. McIDAS-V is based on three powerful software elements. VisAD is a Java library for building interactive, collaborative, 4 dimensional visualization and analysis tools. The Integrated Data Viewer (IDV) is a reference application based on the VisAD system and developed by the Unidata program that demonstrates the flexibility that is needed in this evolving environment, using a modern, object-oriented software design approach. The third tool, HYDRA, allows users to build, display and interrogate multi and hyperspectral environmental satellite data in powerful ways. The McIDAS-V software is being used for training and education in several settings. The McIDAS User Group provides training workshops at its annual meeting. Numerous online tutorials with training data sets have been developed to aid users in learning simple and more complex operations in McIDAS-V, all are available online. In a University of Wisconsin-Madison undergraduate course in Radar and Satellite Meteorology, McIDAS-V is used to create and deliver laboratory exercises using case study and real time data. At the high school level, McIDAS-V is used in several exercises in our annual Summer Workshop in Earth and Atmospheric Sciences to provide young scientists the opportunity to examine data with friendly and powerful tools. This presentation will describe the McIDAS-V software and demonstrate some of the capabilities of McIDAS-V to analyze and display many types of global data. The presentation will also focus on describing how McIDAS-V can be used as an educational window to examine global geophysical data. Consecutive polar orbiting passes of NASA MODIS and CALIPSO observations
Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges
Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc
2013-01-01
The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
United States Air Force High School Apprenticeship Program: 1989 Program Management Report. Volume 3
1988-12-01
content of visual or auditory stimulus exposed to the eyes or ears at a level below normal threshold, it is possible to perceive the subliminal stimuli...usually too small or vague to be consciously recognized, but they are declared to influence the viewer's subconscious sex drive. Stimulation below...programs the mechanisms to stimulate career interests in science and technology in high school students showing promise in these areas. The Air Force High
Video display engineering and optimization system
NASA Technical Reports Server (NTRS)
Larimer, James (Inventor)
1997-01-01
A video display engineering and optimization CAD simulation system for designing an LCD display integrates models of a display device circuit, electro-optics, surface geometry, and physiological optics to model the system performance of a display. This CAD system permits system performance and design trade-offs to be evaluated without constructing a physical prototype of the device. The system includes a series of modules that permit analysis of design trade-offs in terms of their visual impact on a viewer looking at a display.
Posters as an educational strategy.
Duchin, S; Sherwood, G
1990-01-01
Posters are visual aids that are well suited to use as independent sources of information or as support for other presentation formats. By design, the message displayed is brief, constant, and interactive with the viewer. Guidelines for developing a poster include careful delineation of content, knowledge of audience needs, and the environment or setting for the poster. The application of basic design elements, such as simplicity of composition, attractive color combinations, and title spacing, results in a presentation mode that is both attractive and lasting.
Transgressive sexualities: politics of pleasure and desire in Kamasutra: a tale of love and fire.
Lohani-Chase, Rama
2012-01-01
Utilizing feminist film theory, critical reviews, and viewer responses, this article examines visual representations of transgressive sexuality in two diasporic Indian women's films: Kamasutra: A Tale of Love by Mira Nair, and Fire by Deepa Mehta. The article draws from research on ancient discourses on sexuality in India to argue that contemporary constructions of women's sexuality in South Asia are not devoid of patriarchal and fundamentalist cultural politics of representation. Copyright © Taylor & Francis Group, LLC
Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data
NASA Astrophysics Data System (ADS)
Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine
2013-03-01
Stereoscopic 3D is undoubtedly one of the most attractive forms of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that can be generated by 3D are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire covering the most frequent symptoms linked with 3D is used in order to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking have been explored thanks to data provided by the eye-tracker. The statistical analysis showed an important link of blinking duration and number of saccades with visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
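A minimal sketch of the correlation-type (Reichardt) EMD grid described above follows; the 1-D receptor array, frame interval and first-order low-pass delay filter are illustrative assumptions, while the spacing and time constant simply echo the values reported in the abstract.

```python
# Sketch of a grid of correlation-type (Reichardt) EMDs over a 1-D receptor
# array; receptors are assumed to be 0.3 deg apart and the delay filter uses
# the 0.1 s time constant reported above. Illustrative only.
import numpy as np

def emd_output(frames, dt=0.01, tau=0.1):
    """frames: array (T, N) of luminance for N receptors over T time steps.
    Returns the summed opponent EMD response at each time step."""
    alpha = dt / (tau + dt)                  # first-order low-pass coefficient
    delayed = np.zeros(frames.shape[1])      # low-pass (delayed) copy of the input
    out = []
    for frame in frames:
        delayed += alpha * (frame - delayed)
        # opponent correlation between each receptor and its neighbour
        response = delayed[:-1] * frame[1:] - delayed[1:] * frame[:-1]
        out.append(response.sum())
    return np.array(out)
```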
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-12-21
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and a local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies – Three.js, D3.js and PHP – as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-10-01
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and a local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies - Three.js, D3.js and PHP - as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Support for fast comprehension of ICU data: visualization using metaphor graphics.
Horn, W; Popow, C; Unterasinger, L
2001-01-01
The time-oriented analysis of electronic patient records on (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repetition of the graphic object in 24 frames, the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system which uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor graphic object. Each object visualizes important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, continuous improvement or worsening is easy to follow, and drastic changes in the patient's situation get the viewer's attention immediately.
A Lithology Based Map Unit Schema For Onegeology Regional Geologic Map Integration
NASA Astrophysics Data System (ADS)
Moosdorf, N.; Richard, S. M.
2012-12-01
A system of lithogenetic categories for a global lithological map (GLiM, http://www.ifbm.zmaw.de/index.php?id=6460&L=3) has been compiled based on analysis of lithology/genesis categories for regional geologic maps for the entire globe. The scheme is presented for discussion and comment. Analysis of units on a variety of regional geologic maps indicates that units are defined based on assemblages of rock types, as well as their genetic type. In this compilation of continental geology, outcropping surface materials are dominantly sediment/sedimentary rock; major subdivisions of the sedimentary category include clastic sediment, carbonate sedimentary rocks, clastic sedimentary rocks, mixed carbonate and clastic sedimentary rock, colluvium and residuum. Significant areas of mixed igneous and metamorphic rock are also present. A system of global categories to characterize the lithology of regional geologic units is important for Earth System models of matter fluxes to soils, ecosystems, rivers and oceans, and for regional analysis of Earth surface processes at global scale. Because different applications of the classification scheme will focus on different lithologic constituents in mixed units, an ontology-type representation of the scheme that assigns properties to the units in an analyzable manner will be pursued. The OneGeology project is promoting deployment of geologic map services at million scale for all nations. Although initial efforts are commonly simple scanned map WMS services, the intention is to move towards data-based map services that categorize map units with standard vocabularies to allow use of a common map legend for better visual integration of the maps (e.g. see OneGeology Europe, http://onegeology-europe.brgm.fr/geoportal/viewer.jsp). Current categorization of regional units with a single lithology from the CGI SimpleLithology (http://resource.geosciml.org/201202/Vocab2012html/SimpleLithology201012.html) vocabulary poorly captures the lithologic character of such units. A lithogenetic unit category scheme accessible as a GeoSciML-portrayal-based OGC Styled Layer Description resource is key to enabling OneGeology (http://oneGeology.org) geologic map services to achieve a high degree of visual harmonization.
Beyond Narratives: "Free Drawings" as Visual Data in Addiction Research.
Klingemann, Justyna; Klingemann, Harald
2016-05-11
The study presented here explores the usefulness of visual data when assessing addiction careers from various methodological perspectives. The database consists of 14 "free life-course drawings" produced by seven Swiss and seven Polish male alcohol ex-users, and their life history narratives collected in the context of earlier studies on self-change. The analysis follows the principles of the Barthian visual semiotics approach, including the author's and the viewer's perspectives. This is followed by an investigation of the interplay between drawings and narratives in Polish and German. Compared to the detailed narratives, which follow a few sub-storylines at the same time, the drawings provide a more coherent and differentiated overall picture, especially of the emotional state over the life course: the relative subjective importance of highs and lows; a clearer visualisation of mixed positive and negative feelings; as well as identity concepts, such as the interplay between Mead's "I" and "me".
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based layer that wraps the REST application programming interface (API) and queries the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
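As a rough illustration of the kind of request a zero-footprint client like XNATView issues, the sketch below queries an XNAT server's REST interface for project IDs over plain HTTP; the endpoint path, JSON layout, server URL and credentials follow common XNAT conventions but are assumptions here, not the tool's code.

```python
# Hedged sketch of an XNAT REST query; endpoint and response layout are assumed
# from common XNAT conventions and may differ between server versions.
import requests

def list_projects(base_url, user, password):
    """Return the project IDs visible to this user on an XNAT server."""
    resp = requests.get(f"{base_url}/data/projects",
                        params={"format": "json"},
                        auth=(user, password), timeout=30)
    resp.raise_for_status()
    return [row["ID"] for row in resp.json()["ResultSet"]["Result"]]

# usage (hypothetical server and credentials):
# print(list_projects("https://xnat.example.org", "reviewer", "secret"))
```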
Tayler, Laramie D
2005-05-01
Previous studies of the effects of sexual television content have resulted in mixed findings. Based on the information processing model of media effects, I proposed that the messages embodied in such content, the degree to which viewers perceive television content as realistic, and whether sexual content is conveyed using visual or verbal symbols may influence the nature or degree of such effects. I explored this possibility through an experiment in which 182 college undergraduates were exposed to visual or verbal sexual television content, neutral television content, or no television at all prior to completing measures of sexual attitudes and beliefs. Although exposure to sexual content generally did not produce significant main effects, it did influence the attitudes of those who perceive television to be relatively realistic. Verbal sexual content was found to influence beliefs about women's sexual activity among the same group.
Thinking in z-space: flatness and spatial narrativity
NASA Astrophysics Data System (ADS)
Zone, Ray
2012-03-01
Now that digital technology has accessed the Z-space in cinema, narrative artistry is at a loss. Motion picture professionals no longer can readily resort to familiar tools. A new language and new linguistics for Z-axis storytelling are necessary. After first examining the roots of monocular thinking in painting, prior modes of visual narrative in two-dimensional cinema obviating true binocular stereopsis can be explored, particularly montage, camera motion and depth of field, with historic examples. Special attention is paid to the manner in which monocular cues for depth have been exploited to infer depth on a planar screen. Both the artistic potential and visual limitations of actual stereoscopic depth as a filmmaking language are interrogated. After an examination of the historic basis of monocular thinking in visual culture, a context for artistic exploration of the use of the z-axis as a heightened means of creating dramatic and emotional impact upon the viewer is illustrated.
MSAViewer: interactive JavaScript visualization of multiple sequence alignments.
Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana
2016-11-15
The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.
MSAViewer: interactive JavaScript visualization of multiple sequence alignments
Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E.; Rost, Burkhard; Goldberg, Tatyana
2016-01-01
Summary: The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is ‘web ready’: written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. Availability and Implementation: The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh PMID:27412096
Content dependent selection of image enhancement parameters for mobile displays
NASA Astrophysics Data System (ADS)
Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo
2011-01-01
Mobile devices such as cellular phones and portable multimedia players with the capability of playing terrestrial digital multimedia broadcasting (T-DMB) contents have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. The relationships between the objective measures and the optimal values of the image control parameters are modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined based on the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
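The lookup-table step described above can be sketched as follows; the single content measure (mean gradient magnitude) and the table entries are illustrative assumptions, not the parameters derived from the paper's visual experiments.

```python
# Sketch of content-dependent parameter selection: an objective content
# measure indexes a predetermined lookup table of image-control parameters.
# The measure and table entries below are illustrative assumptions.
import numpy as np

SHARPNESS_LUT = [        # (upper bound of the content measure, sharpness gain)
    (0.05, 1.8),         # flat, low-detail content: sharpen strongly
    (0.15, 1.4),
    (1.00, 1.1),         # already detailed content: sharpen gently
]

def edge_measure(gray):
    """Crude content measure: normalised mean gradient magnitude of a frame."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)) / 255.0)

def select_sharpness_gain(gray):
    m = edge_measure(gray)
    for upper_bound, gain in SHARPNESS_LUT:
        if m <= upper_bound:
            return gain
    return SHARPNESS_LUT[-1][1]
```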
Transmission and visualization of large geographical maps
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Zhang, Liang; Ren, Yingchao; Guo, Zhifeng
Transmission and visualization of large geographical maps have become a challenging research issue in GIS applications. This paper presents an efficient and robust way to simplify large geographical maps using frame buffers and Voronoi diagrams. The topological relationships are kept during the simplification by removing the Voronoi diagram's self-overlapped regions. With the simplified vector maps, we establish different levels of detail (LOD) models of these maps. Then we introduce a client/server architecture which integrates our out-of-core algorithm with a progressive transmission and rendering scheme based on computer graphics hardware. The architecture allows viewers to view different regions interactively at different LODs over the network. Experimental results show that our proposed scheme provides an effective way to transmit and manipulate large maps.
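A very small sketch of the level-of-detail selection implied above follows; the thresholds are illustrative assumptions, and the Voronoi-based simplification and out-of-core streaming themselves are not reproduced here.

```python
# Toy LOD selection for progressive map transmission: coarser simplifications
# are shown at small scales, finer vectors streamed in as the viewer zooms.
# Threshold values are illustrative assumptions.
LOD_LEVELS = [      # (minimum metres represented by one screen pixel, LOD index)
    (500.0, 0),     # continent-scale view: coarsest simplification
    (50.0, 1),
    (5.0, 2),
    (0.0, 3),       # street-scale view: full-resolution vector data
]

def choose_lod(metres_per_pixel):
    for threshold, lod in LOD_LEVELS:
        if metres_per_pixel >= threshold:
            return lod
    return LOD_LEVELS[-1][1]
```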
Solano-Román, Antonio; Alfaro-Arias, Verónica; Cruz-Castillo, Carlos; Orozco-Solano, Allan
2018-03-15
VizGVar was designed to meet the growing need of the research community for improved genomic and proteomic data viewers that benefit from better information visualization. We implemented a new information architecture and applied user-centered design principles to provide a new, improved way of visualizing genetic information and protein data related to human disease. VizGVar connects the entire database of Ensembl protein motifs, domains, genes and exons with annotated SNPs and somatic variations from PharmGKB and COSMIC. VizGVar precisely represents genetic variations and their respective locations by colored curves designating different types of variations. The structured hierarchy of biological data is reflected in aggregated patterns through different levels, integrating several layers of information at once. VizGVar provides a new interactive, web-based JavaScript visualization of somatic mutations and protein variation, enabling fast and easy discovery of clinically relevant variation patterns. VizGVar is accessible at http://vizport.io/vizgvar; http://vizport.io/vizgvar/doc/. asolano@broadinstitute.org or allan.orozcosolano@ucr.ac.cr.
Genoviz Software Development Kit: Java tool kit for building genomics visualization applications.
Helt, Gregg A; Nicol, John W; Erwin, Ed; Blossom, Eric; Blanchard, Steven G; Chervitz, Stephen A; Harmon, Cyrus; Loraine, Ann E
2009-08-25
Visualization software can expose previously undiscovered patterns in genomic data and advance biological science. The Genoviz Software Development Kit (SDK) is an open source, Java-based framework designed for rapid assembly of visualization software applications for genomics. The Genoviz SDK framework provides a mechanism for incorporating adaptive, dynamic zooming into applications, a desirable feature of genome viewers. Visualization capabilities of the Genoviz SDK include automated layout of features along genetic or genomic axes; support for user interactions with graphical elements (Glyphs) in a map; a variety of Glyph sub-classes that promote experimentation with new ways of representing data in graphical formats; and support for adaptive, semantic zooming, whereby objects change their appearance depending on zoom level and zooming rate adapts to the current scale. Freely available demonstration and production quality applications, including the Integrated Genome Browser, illustrate Genoviz SDK capabilities. Separation between graphics components and genomic data models makes it easy for developers to add visualization capability to pre-existing applications or build new applications using third-party data models. Source code, documentation, sample applications, and tutorials are available at http://genoviz.sourceforge.net/.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
Paek, Hye-Jin; Kim, Kyongseok; Hove, Thomas
2010-12-01
Focusing on several message features that are prominent in antismoking campaign literature, this content-analytic study examines 934 antismoking video clips on YouTube for the following characteristics: message sensation value (MSV) and three types of message appeal (threat, social and humor). These four characteristics are then linked to YouTube's interactive audience response mechanisms (number of viewers, viewer ratings and number of comments) to capture message reach, viewer preference and viewer engagement. The findings suggest the following: (i) antismoking messages are prevalent on YouTube, (ii) MSV levels of online antismoking videos are relatively low compared with MSV levels of televised antismoking messages, (iii) threat appeals are the videos' predominant message strategy and (iv) message characteristics are related to viewer reach and viewer preference.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... Television Signals Pursuant to the Satellite Home Viewer Extension and Reauthorization Act of 2004 AGENCY... Satellite Home Viewer Extension Act of 2004. The information collection requirements were approved on June... Measurement Standards for Digital Television Signals pursuant to the Satellite Home Viewer Extension and...
Partially converted stereoscopic images and the effects on visual attention and memory
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi
2015-03-01
This study contained two experimental examinations of cognitive activities, such as visual attention and memory, in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. The visual attention and impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was intersected between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct response rate in the partial 3D condition was significantly higher in the recognition task than in the other conditions. These results showed that partially converted 3D images tend to attract visual attention and affect the viewer's memory.
3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models
NASA Astrophysics Data System (ADS)
Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.
2013-07-01
Cultural heritage managers in general, and information users in particular, are not usually used to dealing with highly technological hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.
Health Vlogger-Viewer Interaction in Chronic Illness Management
Liu, Leslie S.; Huh, Jina; Neogi, Tina; Inkpen, Kori; Pratt, Wanda
2014-01-01
Health video blogs (vlogs) allow individuals with chronic illnesses to share their stories, experiences, and knowledge with the general public. Furthermore, health vlogs help in creating a connection between the vlogger and the viewers. In this work, we present a qualitative study examining the various methods that health vloggers use to establish a connection with their viewers. We found that vloggers used genres to express specific messages to their viewers while using the uniqueness of video to establish a deeper connection with their viewers. Health vloggers also explicitly sought interaction with their viewers. Based on these results, we present design implications to help facilitate and build sustainable communities for vloggers. PMID:24634895
Media and the making of scientists
NASA Astrophysics Data System (ADS)
O'Keeffe, Moira
This dissertation explores how scientists and science students respond to fictional, visual media about science. I consider how scientists think about images of science in relation to their own career paths from childhood onwards. I am especially interested in the possibility that entertainment media can inspire young people to learn about science. Such inspiration is badly needed, as schools are failing to provide it. Science education in the United States is in a state of crisis. Studies repeatedly find low levels of science literacy in the U.S. This bleak situation exists during a boom in the popularity of science-oriented television shows and science fiction movies. How might entertainment media play a role in helping young people engage with science? To grapple with these questions, I interviewed a total of fifty scientists and students interested in science careers, representing a variety of scientific fields and demographic backgrounds, and with varying levels of interest in science fiction. Most respondents described becoming attracted to the sciences at a young age, and many were able to identify specific sources for this interest. The fact that interest in the sciences begins early in life demonstrates a potentially important role for fictional media in the process of inspiration, perhaps especially for children without access to real-life scientists. One key aspect of the appeal of fiction about science is how scientists are portrayed as characters. Scientists from groups traditionally under-represented in the sciences often sought out fictional characters with whom they could identify, and viewers from all backgrounds preferred well-rounded characters to the extreme stereotypes of mad or dorky scientists. Genre is another aspect of appeal. Some respondents identified a specific role for science fiction: conveying a sense of wonder. Visual media introduce viewers to the beauty of science. Special effects, in particular, allow viewers to explore the unknown. Advocates of informal science learning initiatives suggest that media can be used as a tool for teaching science content. The potential of entertainment media to provide a sense of wonder is a powerful aspect of its potential to inspire the next generation of scientists.
Television Sex Roles in the 1980s: Do Viewers' Sex and Sex Role Orientation Change the Picture?
ERIC Educational Resources Information Center
Dambrot, Faye H.; And Others
1988-01-01
Investigates the viewer perceptions of female and male television characters as a result of viewer sex and sex role orientation, based on the responses of 677 young adults to the Personal Attributes Questionnaire (PAQ). Viewer gender had an impact on the rating of female characters. (FMW)
Connecting Art and the Brain: An Artist's Perspective on Visual Indeterminacy
Pepperell, Robert
2011-01-01
In this article I will discuss the intersection between art and neuroscience from the perspective of a practicing artist. I have collaborated on several scientific studies into the effects of art on the brain and behavior, looking in particular at the phenomenon of “visual indeterminacy.” This is a perceptual state in which subjects fail to recognize objects from visual cues. I will look at the background to this phenomenon, and show how various artists have exploited its effect through the history of art. My own attempts to create indeterminate images will be discussed, including some of the technical problems I faced in trying to manipulate the viewer's perceptual state through paintings. Visual indeterminacy is not widely studied in neuroscience, although references to it can be found in the literature on visual agnosia and object recognition. I will briefly review some of this work and show how my attempts to understand the science behind visual indeterminacy led me to collaborate with psychophysicists and neuroscientists. After reviewing this work, I will discuss the conclusions I have drawn from its findings and consider the problem of how best to integrate neuroscientific methods with artistic knowledge to create truly interdisciplinary approach. PMID:21887141
Solid object visualization of 3D ultrasound data
NASA Astrophysics Data System (ADS)
Nelson, Thomas R.; Bailey, Michael J.
2000-04-01
Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment producing solid object prototype models of computer-generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to match the viewer's reference frame more optimally. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.
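The voxel-to-polygon step described above can be sketched as below; scikit-image's marching cubes routine is used here as a stand-in for the study's own segmentation-and-surfacing pipeline (an assumption, not their code), and the ASCII STL writer is only a minimal illustration of handing scaled geometry to RP equipment.

```python
# Hedged sketch: extract an iso-surface from a segmented 3DUS volume and write
# an ASCII STL file at physical scale for a rapid-prototyping machine.
# marching_cubes stands in for the study's own voxel-to-polygon conversion.
import numpy as np
from skimage.measure import marching_cubes

def volume_to_stl(volume, iso_level, voxel_size_mm, path):
    """volume: 3D array; voxel_size_mm: per-axis voxel dimensions in mm."""
    verts, faces, _, _ = marching_cubes(volume, level=iso_level,
                                        spacing=voxel_size_mm)
    with open(path, "w") as f:
        f.write("solid ultrasound\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)
            n /= np.linalg.norm(n) + 1e-12
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid ultrasound\n")
```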
SNPmplexViewer--toward a cost-effective traceability system
2011-01-01
Background: Beef traceability has become mandatory in many regions of the world and is typically achieved through the use of unique numerical codes on ear tags and animal passports. DNA-based traceability uses the animal's own DNA code to identify it and the products derived from it. Using SNaPshot, a primer-extension-based method, a multiplex of 25 SNPs in a single reaction has been used to reduce the expense of genotyping a panel of SNPs useful for identity control. Findings: To further decrease SNaPshot's cost, we introduced the Perl script SNPmplexViewer, which facilitates the analysis of trace files for reactions performed without the use of fluorescent size standards. SNPmplexViewer automatically aligns reference and target trace electropherograms, run with and without fluorescent size standards, respectively. SNPmplexViewer produces a modified target trace file containing a normalised trace in which the reference size standards are embedded. SNPmplexViewer also outputs aligned images of the two electropherograms together with a difference profile. Conclusions: Modified trace files generated by SNPmplexViewer enable genotyping of SNaPshot reactions performed without fluorescent size standards, using common fragment-sizing software packages. SNPmplexViewer's normalised output may also improve the genotyping software's performance. Thus, SNPmplexViewer is a general free tool enabling the reduction of SNaPshot's cost as well as fast viewing and comparing of trace electropherograms for fragment analysis. SNPmplexViewer is available at http://cowry.agri.huji.ac.il/cgi-bin/SNPmplexViewer.cgi. PMID:21600063
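SNPmplexViewer itself is a Perl script; the Python fragment below is only a conceptual sketch of its central step, aligning a target electropherogram (run without size standards) to a reference trace so the standards can be transferred. A single cross-correlation shift stands in for the script's alignment; real traces may also need local stretching.

```python
# Conceptual sketch only: estimate the shift that aligns a target trace to a
# reference trace, then map the reference's size-standard peak positions into
# target coordinates. Not the published Perl implementation.
import numpy as np

def estimate_shift(reference, target):
    """Lag (in data points) such that reference[t] best matches target[t - lag]."""
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    tgt = (target - target.mean()) / (target.std() + 1e-12)
    corr = np.correlate(ref, tgt, mode="full")
    return int(np.argmax(corr)) - (len(tgt) - 1)

def transfer_size_standards(reference_peak_positions, shift):
    """Embed reference size-standard peaks at the matching target positions."""
    return [pos - shift for pos in reference_peak_positions]
```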
Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori
2013-06-01
The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4 % of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
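The practical rule implied by these measurements (keep paging slower than half the display refresh rate) amounts to a one-line calculation; the sketch below is illustrative, with a typical 60 Hz display assumed in the usage comment.

```python
# Back-of-the-envelope check implied by the finding above: without special
# frame synchronisation, paging faster than half the refresh rate risks
# dropped slices. Illustrative only.
def max_safe_paging_rate(refresh_hz):
    """Slices per second below which each slice spans at least two refreshes."""
    return refresh_hz / 2.0

# e.g. a typical 60 Hz display: at most ~30 slices per second
# print(max_safe_paging_rate(60.0))
```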
The Worldviews Network: Transformative Global Change Education in Immersive Environments
NASA Astrophysics Data System (ADS)
Hamilton, H.; Yu, K. C.; Gardiner, N.; McConville, D.; Connolly, R.; Irving, L. S.
2011-12-01
Our modern age is defined by an astounding capacity to generate scientific information. From DNA to dark matter, human ingenuity and technologies create an endless stream of data about ourselves and the world of which we are a part. Yet we largely founder in transforming information into understanding, and understanding into rational action for our society as a whole. Earth and biodiversity scientists are especially frustrated by this impasse because the data they gather often point to a clash between Earth's capacity to sustain life and the decisions that humans make to garner the planet's resources. Immersive virtual environments offer an underexplored link in the translation of scientific data into public understanding, dialogue, and action. The Worldviews Network is a collaboration of scientists, artists, and educators focused on developing best practices for the use of immersive environments for science-based ecological literacy education. A central tenet of the Worldviews Network is that there are multiple ways to know and experience the world, so we are developing scientifically accurate, geographically relevant, and culturally appropriate programming to promote ecological literacy within informal science education programs across the United States. The goal of Worldviews Network is to offer transformative learning experiences, in which participants are guided on a process integrating immersive visual explorations, critical reflection and dialogue, and design-oriented approaches to action - or more simply, seeing, knowing, and doing. Our methods center on live presentations, interactive scientific visualizations, and sustainability dialogues hosted at informal science institutions. Our approach uses datasets from the life, Earth, and space sciences to illuminate the complex conditions that support life on earth and the ways in which ecological systems interact. We are leveraging scientific data from federal agencies, non-governmental organizations, and our own research to develop a library of immersive visualization stories and templates that explore ecological relationships across time at cosmic, global, and bioregional scales, with learning goals aligned to climate and earth science literacy principles. These experiential narratives are used to increase participants' awareness of global change issues as well as to engage them in dialogues and design processes focused on steps they can take within their own communities to systemically address these interconnected challenges. More than 600 digital planetariums in the U.S. collectively represent a pioneering opportunity for distributing Earth systems messages over large geographic areas. By placing the viewer-and Earth itself-within the context of the rest of the universe, digital planetariums can uniquely provide essential transcalar perspectives on the complex interdependencies of Earth's interacting physical and biological systems. The Worldviews Network is creating innovative, data-driven approaches for engaging the American public in dialogues about human-induced global changes.
VPV--The velocity profile viewer user manual
Donovan, John M.
2004-01-01
The Velocity Profile Viewer (VPV) is a tool for visualizing time series of velocity profiles developed by the U.S. Geological Survey (USGS). The USGS uses VPV to preview and present measured velocity data from acoustic Doppler current profilers and simulated velocity data from three-dimensional estuarine, river, and lake hydrodynamic models. The data can be viewed as an animated three-dimensional profile or as a stack of time-series graphs that each represents a location in the water column. The graphically displayed data are shown at each time step like frames of animation. The animation can play at several different speeds or can be suspended on one frame. The viewing angle and time can be manipulated using mouse interaction. A number of options control the appearance of the profile and the graphs. VPV cannot edit or save data, but it can create a PostScript file showing the velocity profile in three dimensions. This user manual describes how to use each of these features. VPV is available and can be downloaded for free from the World Wide Web at http://ca.water.usgs.gov/program/sfbay/vpv.
Web-based visualisation and analysis of 3D electron-microscopy data from EMDB and PDB.
Lagerstedt, Ingvar; Moore, William J; Patwardhan, Ardan; Sanz-García, Eduardo; Best, Christoph; Swedlow, Jason R; Kleywegt, Gerard J
2013-11-01
The Protein Data Bank in Europe (PDBe) has developed web-based tools for the visualisation and analysis of 3D electron microscopy (3DEM) structures in the Electron Microscopy Data Bank (EMDB) and Protein Data Bank (PDB). The tools include: (1) a volume viewer for 3D visualisation of maps, tomograms and models, (2) a slice viewer for inspecting 2D slices of tomographic reconstructions, and (3) visual analysis pages to facilitate analysis and validation of maps, tomograms and models. These tools were designed to help non-experts and experts alike to get some insight into the content and assess the quality of 3DEM structures in EMDB and PDB without the need to install specialised software or to download large amounts of data from these archives. The technical challenges encountered in developing these tools, as well as the more general considerations when making archived data available to the user community through a web interface, are discussed. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Gibboni, Robert R; Zimmerman, Prisca E; Gothard, Katalin M
2009-01-01
Scanpaths (the succession of fixations and saccades during spontaneous viewing) contain information about the image but also about the viewer. To determine the viewer-dependent factors in the scanpaths of monkeys, we trained three adult males (Macaca mulatta) to look for 3 s at images of conspecific facial expressions with either direct or averted gaze. The subjects showed significant differences on four basic scanpath parameters (number of fixations, fixation duration, saccade length, and total scanpath length) when viewing the same facial expression/gaze direction combinations. Furthermore, we found differences between monkeys in feature preference and in the temporal order in which features were visited on different facial expressions. Overall, the between-subject variability was larger than the within-subject variability, suggesting that scanpaths reflect individual preferences in allocating visual attention to various features in aggressive, neutral, and appeasing facial expressions. Individual scanpath characteristics were brought into register with the genotype for the serotonin transporter regulatory gene (5-HTTLPR) and with behavioral characteristics such as expression of anticipatory anxiety and impulsiveness/hesitation in approaching food in the presence of a potentially dangerous object.
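The four basic scanpath parameters compared above can be computed directly from a fixation list; the sketch below assumes fixations given as (x, y, duration) tuples in degrees and seconds, with saccade length taken as the straight-line distance between successive fixations, which is a simplifying assumption.

```python
# Sketch of the four basic scanpath parameters from a list of fixations given
# as (x_deg, y_deg, duration_s). Straight-line inter-fixation distances stand
# in for measured saccade amplitudes (a simplifying assumption).
import math

def scanpath_parameters(fixations):
    saccades = [math.hypot(x2 - x1, y2 - y1)
                for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])]
    return {
        "n_fixations": len(fixations),
        "mean_fixation_duration_s": sum(d for _, _, d in fixations) / len(fixations),
        "mean_saccade_length_deg": sum(saccades) / len(saccades) if saccades else 0.0,
        "total_scanpath_length_deg": sum(saccades),
    }
```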
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
Design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive, stereoscopic image array stored on a computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
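A toy sketch of the viewpoint-dependent playback idea follows: the body tracker's lateral head position selects the nearest pre-recorded stereo pair from the videodisc image array. The tracker range and one-frame-per-centimetre spacing are illustrative assumptions.

```python
# Toy sketch: map a tracked head position to the index of the closest stored
# stereoscopic frame. Frame spacing and tracker range are assumptions.
def frame_for_viewpoint(head_x_cm, x_min_cm=-50.0, frame_spacing_cm=1.0,
                        n_frames=101):
    """Return the index of the recorded stereo pair nearest the viewer's position."""
    index = round((head_x_cm - x_min_cm) / frame_spacing_cm)
    return max(0, min(n_frames - 1, index))   # clamp to the recorded array
```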
Weighted-MSE based on saliency map for assessing video quality of H.264 video streams
NASA Astrophysics Data System (ADS)
Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.
2011-01-01
The human vision system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) according to the calculated saliency map at each pixel, yielding a Weighted MSE (WMSE). Our method was validated through subjective quality experiments.
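A minimal NumPy sketch of the saliency-weighted MSE follows; normalising by the total saliency is an assumption of this sketch rather than a detail taken from the paper.

```python
# Sketch of a saliency-weighted MSE: per-pixel squared errors are weighted by
# a focus-of-attention (saliency) map before averaging. Normalisation by the
# summed saliency is an assumption of this sketch.
import numpy as np

def weighted_mse(reference, decoded, saliency):
    """reference, decoded: same-shape frames; saliency: non-negative weight map."""
    err = (reference.astype(float) - decoded.astype(float)) ** 2
    return float(np.sum(saliency * err) / (np.sum(saliency) + 1e-12))
```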
A predictor-corrector technique for visualizing unsteady flow
NASA Technical Reports Server (NTRS)
Banks, David C.; Singer, Bart A.
1995-01-01
We present a method for visualizing unsteady flow by displaying its vortices. The vortices are identified by using a vorticity-predictor pressure-corrector scheme that follows vortex cores. The cross-sections of a vortex at each point along the core can be represented by a Fourier series. A vortex can be faithfully reconstructed from the series as a simple quadrilateral mesh, or its reconstruction can be enhanced to indicate helical motion. The mesh can reduce the representation of the flow features by a factor of one thousand or more compared with the volumetric dataset. With this amount of reduction it is possible to implement an interactive system on a graphics workstation to permit a viewer to examine, in three dimensions, the evolution of the vortical structures in a complex, unsteady flow.
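The per-section Fourier representation mentioned above can be reconstructed as in the sketch below; the predictor-corrector core tracing and the assembly of the full quadrilateral tube mesh are omitted, and the coefficient layout is an assumption of this sketch.

```python
# Sketch: reconstruct one vortex cross-section from Fourier coefficients,
# r(theta) = a[0] + sum_k a[k]*cos(k*theta) + b[k]*sin(k*theta). Coefficient
# layout (a[0] as mean radius, b[0] unused) is an assumption of this sketch.
import numpy as np

def cross_section(a, b, n_points=32):
    """Return (x, y) points of the cross-section in the plane normal to the core."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.full_like(theta, float(a[0]))
    for k in range(1, min(len(a), len(b))):
        r += a[k] * np.cos(k * theta) + b[k] * np.sin(k * theta)
    return r * np.cos(theta), r * np.sin(theta)
```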
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
Arachne—A web-based event viewer for MINERνA
NASA Astrophysics Data System (ADS)
Tagg, N.; Brangham, J.; Chvojka, J.; Clairemont, M.; Day, M.; Eberly, B.; Felix, J.; Fields, L.; Gago, A. M.; Gran, R.; Harris, D. A.; Kordosky, M.; Lee, H.; Maggi, G.; Maher, E.; Mann, W. A.; Marshall, C. M.; McFarland, K. S.; McGowan, A. M.; Mislivec, A.; Mousseau, J.; Osmanov, B.; Osta, J.; Paolone, V.; Perdue, G.; Ransome, R. D.; Ray, H.; Schellman, H.; Schmitz, D. W.; Simon, C.; Solano Salinas, C. J.; Tice, B. G.; Walding, J.; Walton, T.; Wolcott, J.; Zhang, D.; Ziemer, B. P.; MinerνA Collaboration
2012-06-01
Neutrino interaction events in the MINERνA detector are visually represented with a web-based tool called Arachne. Data are retrieved from a central server via AJAX, and client-side JavaScript draws images into the user's browser window using the draft HTML 5 standard. These technologies allow neutrino interactions to be viewed by anyone with a web browser, allowing for easy hand-scanning of particle interactions. Arachne has been used in MINERνA to evaluate neutrino data in a prototype detector, to tune reconstruction algorithms, and for public outreach and education.
Arachne - A web-based event viewer for MINERvA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tagg, N. (Otterbein Coll.); Brangham, J.
2011-11-01
Neutrino interaction events in the MINERvA detector are visually represented with a web-based tool called Arachne. Data are retrieved from a central server via AJAX, and client-side JavaScript draws images into the user's browser window using the draft HTML 5 standard. These technologies allow neutrino interactions to be viewed by anyone with a web browser, allowing for easy hand-scanning of particle interactions. Arachne has been used in MINERvA to evaluate neutrino data in a prototype detector, to tune reconstruction algorithms, and for public outreach and education.
pileup.js: a JavaScript library for interactive and in-browser visualization of genomic data.
Vanderkam, Dan; Aksoy, B Arman; Hodes, Isaac; Perrone, Jaclyn; Hammerbacher, Jeff
2016-08-01
pileup.js is a new browser-based genome viewer. It is designed to facilitate the investigation of evidence for genomic variants within larger web applications. It takes advantage of recent developments in the JavaScript ecosystem to provide a modular, reliable and easily embedded library. The code and documentation for pileup.js are publicly available at https://github.com/hammerlab/pileup.js under the Apache 2.0 license. Contact: correspondence@hammerlab.org. © The Author 2016. Published by Oxford University Press.
Neural correlates of risk perception during real-life risk communication.
Schmälzle, Ralf; Häcker, Frank; Renner, Britta; Honey, Christopher J; Schupp, Harald T
2013-06-19
During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.
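The intersubject correlation measure used above reduces, for one region, to averaging pairwise correlations of the viewers' time courses; the sketch below assumes a (subjects × timepoints) array and is illustrative rather than the study's analysis code.

```python
# Sketch of intersubject correlation (ISC) for one brain region: the mean
# pairwise Pearson correlation of the viewers' response time courses.
# Input layout is an assumption; this is not the study's analysis pipeline.
import numpy as np
from itertools import combinations

def intersubject_correlation(time_courses):
    """time_courses: array of shape (n_subjects, n_timepoints)."""
    rs = [np.corrcoef(time_courses[i], time_courses[j])[0, 1]
          for i, j in combinations(range(time_courses.shape[0]), 2)]
    return float(np.mean(rs))
```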
Neural Correlates of Risk Perception during Real-Life Risk Communication
Häcker, Frank; Renner, Britta; Honey, Christopher J.; Schupp, Harald T.
2013-01-01
During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions. PMID:23785147
NASA Astrophysics Data System (ADS)
Shaya, E.; Kargatis, V.; Blackwell, J.; Borne, K.; White, R. A.; Cheung, C.
1998-05-01
Several new web-based services have been introduced this year by the Astrophysics Data Facility (ADF) at the NASA Goddard Space Flight Center. IMPReSS is a graphical interface to astrophysics databases that presents the user with the footprints of observations of space-based missions. It also aids astronomers in retrieving these data by sending requests to distributed data archives. The VIEWER is a reader of ADC astronomical catalogs and journal tables that allows subsetting of catalogs by column choices and range selection and provides database-like search capability within each table. With it, the user can easily find the table data most appropriate for their purposes and then download either the subset table or the original table. CATSEYE is a tool that plots output tables from the VIEWER (and soon AMASE), making exploration of the datasets fast and easy. Having completed the basic functionality of these systems, we are enhancing the site to provide advanced functionality. These enhancements will include: market-basket storage of tables and records of VIEWER output for IMPReSS and AstroBrowse queries, non-HTML table responses to AstroBrowse-type queries, general column arithmetic, modularity to allow entrance into the sequence of web pages at any point, histogram plots, navigable maps, and overplotting of catalog objects on mission footprint maps. When completed, the ADF/ADC web facilities will provide astronomical tabular data and mission retrieval information in several hyperlinked environments geared for users at any level, from the school student to the typical astronomer to the expert data-mining tools at state-of-the-art data centers.
svviz: a read viewer for validating structural variants.
Spies, Noah; Zook, Justin M; Salit, Marc; Sidow, Arend
2015-12-15
Visualizing read alignments is the most effective way to validate candidate structural variants (SVs) with existing data. We present svviz, a sequencing read visualizer for SVs that sorts and displays only reads relevant to a candidate SV. svviz works by searching input bam(s) for potentially relevant reads, realigning them against the inferred sequence of the putative variant allele as well as the reference allele and identifying reads that match one allele better than the other. Separate views of the two alleles are then displayed in a scrollable web browser view, enabling a more intuitive visualization of each allele, compared with the single reference genome-based view common to most current read browsers. The browser view facilitates examining the evidence for or against a putative variant, estimating zygosity, visualizing affected genomic annotations and manual refinement of breakpoints. svviz supports data from most modern sequencing platforms. svviz is implemented in python and freely available from http://svviz.github.io/. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
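The central step svviz describes, realigning each candidate read against the reference and the variant allele and keeping only reads that clearly prefer one, can be sketched as follows. This toy uses a crude string-similarity score in place of a real aligner, and the sequences and margin threshold are invented; it is not svviz's implementation.

```python
from difflib import SequenceMatcher

def assign_allele(read, ref_seq, alt_seq, min_margin=0.1):
    """Assign a read to 'ref', 'alt', or 'ambiguous' by comparing a
    simple similarity score against both candidate allele sequences."""
    ref_score = SequenceMatcher(None, read, ref_seq).ratio()
    alt_score = SequenceMatcher(None, read, alt_seq).ratio()
    if abs(ref_score - alt_score) < min_margin:
        return "ambiguous"
    return "alt" if alt_score > ref_score else "ref"

# Hypothetical 20 bp locus with a 4 bp deletion on the variant allele
ref_allele = "ACGTACGTAAGGTTCCACGT"
alt_allele = "ACGTACGTTTCCACGT"
reads = ["ACGTACGTAAGGTTCC", "ACGTACGTTTCCACG", "ACGTAC"]
for r in reads:
    print(r, "->", assign_allele(r, ref_allele, alt_allele))
```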
Using Tablet for visual exploration of second-generation sequencing data.
Milne, Iain; Stephen, Gordon; Bayer, Micha; Cock, Peter J A; Pritchard, Leighton; Cardle, Linda; Shaw, Paul D; Marshall, David
2013-03-01
The advent of second-generation sequencing (2GS) has provided a range of significant new challenges for the visualization of sequence assemblies. These include the large volume of data being generated, short-read lengths and different data types and data formats associated with the diversity of new sequencing technologies. This article illustrates how Tablet-a high-performance graphical viewer for visualization of 2GS assemblies and read mappings-plays an important role in the analysis of these data. We present Tablet, and through a selection of use cases, demonstrate its value in quality assurance and scientific discovery, through features such as whole-reference coverage overviews, variant highlighting, paired-end read mark-up, GFF3-based feature tracks and protein translations. We discuss the computing and visualization techniques utilized to provide a rich and responsive graphical environment that enables users to view a range of file formats with ease. Tablet installers can be freely downloaded from http://bioinf.hutton.ac.uk/tablet in 32 or 64-bit versions for Windows, OS X, Linux or Solaris. For further details on the Tablet, contact tablet@hutton.ac.uk.
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
Pictorial communication: Pictures and the synthetic universe
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
1989-01-01
Principles for the design of dynamic spatial instruments for communicating quantitative information to viewers are considered through a brief review of the history of pictorial communication. Pictorial communication is seen to have two directions: (1) from the picture to the viewer; and (2) from the viewer to the picture. Optimization of the design of interactive instruments using pictorial formats requires an understanding of the manipulative, perceptual, and cognitive limitations of human viewers.
Kim, Sung-Min
2018-01-01
Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
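The lumped-parameter idea behind a program like SIMPL can be illustrated with a minimal water-balance sketch: each flooded mine working is treated as a storage whose level rises with inflow until it reaches the decant elevation. The inflow, storage area, and elevations below are invented, and the interpond pipe-flow coupling that SIMPL models is omitted.

```python
def rebound_curve(initial_level, decant_level, inflow_m3_per_day,
                  storage_area_m2, days):
    """Predict daily groundwater levels in a single flooded mine pond.

    Lumped model: the level rises by inflow / storage area per day and
    is capped at the decant elevation (shaft or drift overflow).
    """
    level = initial_level
    levels = []
    for _ in range(days):
        level = min(level + inflow_m3_per_day / storage_area_m2, decant_level)
        levels.append(level)
    return levels

# Hypothetical pond rebounding from -120 m toward a decant point at -35 m
curve = rebound_curve(initial_level=-120.0, decant_level=-35.0,
                      inflow_m3_per_day=800.0, storage_area_m2=5.0e4, days=3650)
print(f"level after 1 year: {curve[364]:.1f} m, after 10 years: {curve[-1]:.1f} m")
```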
First-time viewers' comprehension of films: bridging shot transitions.
Ildirar, Sermin; Schwan, Stephan
2015-02-01
Which perceptual and cognitive prerequisites must be met in order to be able to comprehend a film is still unresolved and a controversial issue. In order to gain some insights into this issue, our field experiment investigates how first-time adult viewers extract and integrate meaningful information across film cuts. Three major types of commonalities between adjacent shots were differentiated, which may help first-time viewers with bridging the shots: pictorial, causal, and conceptual. Twenty first-time, 20 low-experienced and 20 high-experienced viewers from Turkey were shown a set of short film clips containing these three kinds of commonalities. Film clips conformed also to the principles of continuity editing. Analyses of viewers' spontaneous interpretations show that first-time viewers indeed are able to notice basic pictorial (object identity), causal (chains of activity), as well as conceptual (links between gaze direction and object attention) commonalities between shots due to their close relationship with everyday perception and cognition. However, first-time viewers' comprehension of the commonalities is to a large degree fragile, indicating the lack of a basic notion of what constitutes a film. © 2014 The British Psychological Society.
A visual salience map in the primate frontal eye field.
Thompson, Kirk G; Bichot, Narcisse P
2005-01-01
Models of attention and saccade target selection propose that within the brain there is a topographic map of visual salience that combines bottom-up and top-down influences to identify locations for further processing. The results of a series of experiments with monkeys performing visual search tasks have identified a population of frontal eye field (FEF) visually responsive neurons that exhibit all of the characteristics of a visual salience map. The activity of these FEF neurons is not sensitive to specific features of visual stimuli; but instead, their activity evolves over time to select the target of the search array. This selective activation reflects both the bottom-up intrinsic conspicuousness of the stimuli and the top-down knowledge and goals of the viewer. The peak response within FEF specifies the target for the overt gaze shift. However, the selective activity in FEF is not in itself a motor command because the magnitude of activation reflects the relative behavioral significance of the different stimuli in the visual scene and occurs even when no saccade is made. Identifying a visual salience map in FEF validates the theoretical concept of a salience map in many models of attention. In addition, it strengthens the emerging view that FEF is not only involved in producing overt gaze shifts, but is also important for directing covert spatial attention.
Stereoscopic visual fatigue assessment and modeling
NASA Astrophysics Data System (ADS)
Wang, Danli; Wang, Tingting; Gong, Yue
2014-03-01
Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured with either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) according to a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the videos processed by the algorithm. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed randomly. The results show that PMA, VRT and PERCLOS are the most efficient indicators of subjective visual fatigue, and finally a predictive model is derived from stepwise multiple regression.
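The final step, a predictive model derived from stepwise multiple regression on the objective indicators, can be sketched as an ordinary least-squares fit of subjective scores on PMA, VRT, and PERCLOS. The simulated data and coefficients below are placeholders that only show the form of such a model, not the authors' fitted result.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 140  # e.g. 14 subjects x 10 viewing sessions

# Simulated objective indicators (placeholder values, arbitrary units)
pma = rng.normal(8.0, 1.0, n)        # punctum maximum accommodation
vrt = rng.normal(350.0, 40.0, n)     # visual reaction time, ms
perclos = rng.uniform(0.05, 0.4, n)  # fraction of eyelid closure

# Simulated subjective fatigue scores on a five-grade scale
ss = 1.0 - 0.2 * pma + 0.004 * vrt + 4.0 * perclos + rng.normal(0, 0.3, n)

# Least-squares fit of SS on the three indicators plus an intercept
X = np.column_stack([np.ones(n), pma, vrt, perclos])
coef, *_ = np.linalg.lstsq(X, ss, rcond=None)
print("intercept, PMA, VRT, PERCLOS coefficients:", np.round(coef, 3))
```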
A new multimodal interactive way of subjective scoring of 3D video quality of experience
NASA Astrophysics Data System (ADS)
Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.
2014-03-01
People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.
Draenert, F G; Gebhart, F; Berthold, M; Gosau, M; Wagner, W
2010-07-01
The objective of this study was to determine the ability of two flat panel cone beam CT (CBCT) devices to identify demineralized bone and bone transplants in vivo and in vitro. Datasets from patients with autologous bone grafts (n = 9, KaVo 3DeXam (KaVo, Biberach, Germany); n = 38, Accuitomo 40 (Morita, Osaka, Japan)) were retrospectively evaluated. Demineralized and non-demineralized porcine cancellous bone blocks were examined with the two CBCT devices. A SawBone skull (Pacific Research Laboratories, Vashon, WA) was used as a positioning tool for the bone blocks. Descriptive evaluation and image quality assessment were conducted on the KaVo 3DeXam data (voxel size 0.3 mm) using the OsiriX viewer as well as on the Morita Accuitomo data (voxel size 0.25 mm) using proprietary viewer software. Both in vivo and in vitro, the descriptive analysis of the images of the two devices showed well-visualized bone transplants with clearly defined cancellous bones and well-defined single bone trabeculae in all cross-sections. In vitro, demineralized samples showed lower radiographic opacity but no significant loss of quality compared with fresh bone (P = 0.070). Single cancellous bone trabeculae were significantly better visualized with the Morita 3D Accuitomo device than with the KaVo 3DeXam device (P = 0.038). Both the KaVo 3DeXam and Morita 3D Accuitomo devices produce good-quality images of cancellous bones in in vivo remodelling as well as after in vitro demineralization.
The processing of linear perspective and binocular information for action and perception.
Bruggeman, Hugo; Yonas, Albert; Konczak, Jürgen
2007-04-08
To investigate the processing of linear perspective and binocular information for action and for the perceptual judgment of depth, we presented viewers with an actual Ames trapezoidal window. The display, when presented perpendicular to the line of sight, provided perspective information for a rectangular window slanted in depth, while binocular information specified a planar surface in the fronto-parallel plane. We compared pointing towards the display-edges with perceptual judgment of their positions in depth as the display orientation was varied under monocular and binocular view. On monocular trials, pointing and depth judgment were based on the perspective information and failed to respond accurately to changes in display orientation because pictorial information did not vary sufficiently to specify the small differences in orientation. For binocular trials, pointing was based on binocular information and precisely matched the changes in display orientation whereas depth judgment was short of such adjustment and based upon both binocular and perspective-specified slant information. The finding, that on binocular trials pointing was considerably less responsive to the illusion than perceptual judgment, supports an account of two separate processing streams in the human visual system, a ventral pathway involved in object recognition and a dorsal pathway that produces visual information for the control of actions. Previously, similar differences between perception and action were explained by an alternate explanation, that is, viewers selectively attend to different parts of a display in the two tasks. The finding that under monocular view participants responded to perspective information in both the action and the perception task rules out the attention-based argument.
Phua, Joe; Tinkham, Spencer
2016-01-01
This study examined the joint influence of spokesperson type in obesity public service announcements (PSAs) and viewer weight on diet intention, exercise intention, information seeking, and electronic word-of-mouth (eWoM) intention. Results of a 2 (spokesperson type: real person vs. actor) × 2 (viewer weight: overweight vs. non-overweight) between-subjects experiment indicated that overweight viewers who saw the PSA featuring the real person had the highest diet intention, exercise intention, information seeking, and eWoM intention. Parasocial interaction was also found to mediate the relationships between spokesperson type/viewer weight and two of the dependent variables: diet intention and exercise intention. In addition, viewers who saw the PSA featuring the real person rated the spokesperson as significantly higher on source credibility (trustworthiness, competence, and goodwill) than those who saw the PSA featuring the actor.
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
Head-mounted spatial instruments II: Synthetic reality or impossible dream
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Grunwald, Arthur
1989-01-01
A spatial instrument is defined as a spatial display which has been either geometrically or symbolically enhanced to enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. The design of spatial instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but also may significantly benefit from the introduction of artificial distortions which enhance performance. However, these image manipulations can cause a loss of visual-vestibular coordination and induce motion sickness. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.
Trajectory Browser: An Online Tool for Interplanetary Trajectory Analysis and Visualization
NASA Technical Reports Server (NTRS)
Foster, Cyrus James
2013-01-01
The trajectory browser is a web-based tool developed at the NASA Ames Research Center for finding preliminary trajectories to planetary bodies and for providing relevant launch date, time-of-flight and ΔV requirements. The site hosts a database of transfer trajectories from Earth to planets and small-bodies for various types of missions such as rendezvous, sample return or flybys. A search engine allows the user to find trajectories meeting desired constraints on the launch window, mission duration and ΔV capability, while a trajectory viewer tool allows the visualization of the heliocentric trajectory and the detailed mission itinerary. The anticipated user base of this tool consists primarily of scientists and engineers designing interplanetary missions in the context of pre-phase A studies, particularly for performing accessibility surveys to large populations of small-bodies.
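The search-engine part of such a tool amounts to filtering a precomputed table of transfers against user constraints on launch window, time of flight, and ΔV, as in the sketch below. The records and field names are invented for illustration and are not the Trajectory Browser's actual schema.

```python
from datetime import date

# Hypothetical precomputed transfer database (target, launch date, days, km/s)
trajectories = [
    {"target": "433 Eros", "launch": date(2026, 7, 14), "tof_days": 310, "dv_kms": 5.9},
    {"target": "Mars", "launch": date(2026, 11, 2), "tof_days": 210, "dv_kms": 3.9},
    {"target": "101955 Bennu", "launch": date(2027, 5, 30), "tof_days": 400, "dv_kms": 5.1},
]

def search(db, launch_start, launch_end, max_tof_days, max_dv_kms):
    """Return transfers meeting launch-window, duration and delta-V limits."""
    return [t for t in db
            if launch_start <= t["launch"] <= launch_end
            and t["tof_days"] <= max_tof_days
            and t["dv_kms"] <= max_dv_kms]

hits = search(trajectories, date(2026, 1, 1), date(2027, 1, 1),
              max_tof_days=365, max_dv_kms=6.0)
print([t["target"] for t in hits])
```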
Naturalness and interestingness of test images for visual quality evaluation
NASA Astrophysics Data System (ADS)
Halonen, Raisa; Westman, Stina; Oittinen, Pirkko
2011-01-01
Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.
The ALIVE Project: Astronomy Learning in Immersive Virtual Environments
NASA Astrophysics Data System (ADS)
Yu, K. C.; Sahami, K.; Denn, G.
2008-06-01
The Astronomy Learning in Immersive Virtual Environments (ALIVE) project seeks to discover learning modes and optimal teaching strategies using immersive virtual environments (VEs). VEs are computer-generated, three-dimensional environments that can be navigated to provide multiple perspectives. Immersive VEs provide the additional benefit of surrounding a viewer with the simulated reality. ALIVE evaluates the incorporation of an interactive, real-time ``virtual universe'' into formal college astronomy education. In the experiment, pre-course, post-course, and curriculum tests will be used to determine the efficacy of immersive visualizations presented in a digital planetarium versus the same visual simulations in the non-immersive setting of a normal classroom, as well as a control case using traditional classroom multimedia. To normalize for inter-instructor variability, each ALIVE instructor will teach at least one of each class in each of the three test groups.
Global-local visual biases correspond with visual-spatial orientation.
Basso, Michael R; Lowery, Natasha
2004-02-01
Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual property. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
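The bargaining idea can be made concrete with a deliberately simplified sketch: if each macroblock's utility is linear in the bits it receives above a minimum, the weighted Nash Bargaining Solution allocates the surplus bits in proportion to the weights. The weights and minimum allocations below are invented, and the paper's actual utility functions and perceptual weighting are more elaborate.

```python
def nash_bargaining_bits(budget, min_bits, weights):
    """Asymmetric Nash Bargaining allocation with linear utilities.

    Each macroblock i gets its disagreement point min_bits[i] plus a
    share of the surplus proportional to its weight (for example a
    measure of coding complexity or perceptual importance).
    """
    surplus = budget - sum(min_bits)
    if surplus < 0:
        raise ValueError("budget cannot cover the minimum allocations")
    total_w = sum(weights)
    return [m + w / total_w * surplus for m, w in zip(min_bits, weights)]

# Four hypothetical macroblocks competing for a 1200-bit frame budget
min_bits = [100, 100, 100, 100]
complexity = [1.0, 2.0, 4.0, 1.0]  # invented perceptual/complexity weights
print(nash_bargaining_bits(1200, min_bits, complexity))
# -> [200.0, 300.0, 500.0, 200.0]
```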
Biologically Inspired Model for Inference of 3D Shape from Texture
Gomez, Olman; Neumann, Heiko
2016-01-01
A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387
A Target Advertisement System Based on TV Viewer's Profile Reasoning
NASA Astrophysics Data System (ADS)
Lim, Jeongyeon; Kim, Munjo; Lee, Bumshik; Kim, Munchurl; Lee, Heekyung; Lee, Han-Kyu
With the rapidly growing Internet, Internet broadcasting and webcasting have become well-known services. In particular, it is expected that the IPTV service will be one of the principal services on the broadband network [2]. However, the current broadcasting environment serves the general public and requires a passive attitude toward consuming TV programs. For advanced broadcasting environments, various research on personalized broadcasting is needed. For example, current unidirectional advertising provides advertisement content to TV viewers depending on the popularity of TV programs, the viewing rates, the age groups of TV viewers, and the time bands of the TV programs being broadcast. This is not an efficient way to provide useful information to TV viewers from a customization perspective. If a TV viewer does not need particular advertisement content, then that information is wasted on the viewer. Therefore, the target advertisement service is expected to be one of the important services in personalized broadcasting environments. Current research in the area of target advertisement classifies TV viewers into clustered groups who have similar preferences. Digital TV collaborative filtering estimates the user's favourite advertisement content by using the usage history [1, 4, 5]. In these studies, TV viewers are required to provide profile information such as gender, job, and age to the service providers via a PC or Set-Top Box (STB) connected to the digital TV. Based on this explicit information, tailored advertisement content is provided to the TV viewers in a customized way. However, TV viewers may dislike exposing their private information to the service providers because of the risk of misuse. In this case, it is difficult to provide an appropriate target advertisement service.
Heenan, Adam; Troje, Nikolaus F.
2014-01-01
Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli. PMID:24987956
Phased development of a web-based PACS viewer
NASA Astrophysics Data System (ADS)
Gidron, Yoad; Shani, Uri; Shifrin, Mark
2000-05-01
The Web browser is an excellent environment for the rapid development of an effective and inexpensive PACS viewer. In this paper we will share our experience in developing a browser-based viewer, from the inception and prototype stages to its current state of maturity. There are many operational advantages to a browser-based viewer, even when native viewers already exist in the system (with multiple and/or high resolution screens): (1) It can be used on existing personal workstations throughout the hospital. (2) It is easy to make the service available from physicians' homes. (3) The viewer is extremely portable and platform independent. There is a wide variety of means available for implementing the browser-based viewer. Each file sent to the client by the server can perform some end-user or client/server interaction. These means range from HTML (HyperText Markup Language) files, through JavaScript, to Java applets. Some data types may also invoke plug-in code in the client; although this would reduce the portability of the viewer, it would provide the needed efficiency in critical places. On the server side the range of means is also very rich: (1) A set of files: HTML, JavaScript, Java applets, etc. (2) Extensions of the server via cgi-bin programs, (3) Extensions of the server via servlets, (4) Any other helper application residing and working with the server to access the DICOM archive. The viewer architecture consists of two basic parts: The first part performs query and navigation through the DICOM archive image folders. The second part does the image access and display. While the first part deals with low data traffic, it involves many database transactions. The second part is simple as far as access transactions are concerned, but requires much more data traffic and display functions. Our web-based viewer has gone through three development stages characterized by the complexity of the means and tools employed on both client and server sides.
Ambroggio, Xavier I; Dommer, Jennifer; Gopalan, Vivek; Dunham, Eleca J; Taubenberger, Jeffery K; Hurt, Darrell E
2013-06-18
Influenza A viruses possess RNA genomes that mutate frequently in response to immune pressures. The mutations in the hemagglutinin genes are particularly significant, as the hemagglutinin proteins mediate attachment and fusion to host cells, thereby influencing viral pathogenicity and species specificity. Large-scale influenza A genome sequencing efforts have been ongoing to understand past epidemics and pandemics and anticipate future outbreaks. Sequencing efforts thus far have generated nearly 9,000 distinct hemagglutinin amino acid sequences. Comparative models for all publicly available influenza A hemagglutinin protein sequences (8,769 to date) were generated using the Rosetta modeling suite. The C-alpha root mean square deviations between a randomly chosen test set of models and their crystallographic templates were less than 2 Å, suggesting that the modeling protocols yielded high-quality results. The models were compiled into an online resource, the Hemagglutinin Structure Prediction (HASP) server. The HASP server was designed as a scientific tool for researchers to visualize hemagglutinin protein sequences of interest in a three-dimensional context. With a built-in molecular viewer, hemagglutinin models can be compared side-by-side and navigated by a corresponding sequence alignment. The models and alignments can be downloaded for offline use and further analysis. The modeling protocols used in the HASP server scale well for large amounts of sequences and will keep pace with expanded sequencing efforts. The conservative approach to modeling and the intuitive search and visualization interfaces allow researchers to quickly analyze hemagglutinin sequences of interest in the context of the most highly related experimental structures, and allow them to directly compare hemagglutinin sequences to each other simultaneously in their two- and three-dimensional contexts. The models and methodology have shown utility in current research efforts and the ongoing aim of the HASP server is to continue to accelerate influenza A research and have a positive impact on global public health.
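The quality check quoted above, the C-alpha RMSD between a model and its crystallographic template, can be computed by superposing the two coordinate sets with the Kabsch algorithm and taking the root-mean-square deviation, as in the generic sketch below. The random coordinates stand in for real structures; this is not the HASP pipeline itself.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """C-alpha RMSD after optimal rigid superposition (Kabsch algorithm).

    P, Q : (n_residues, 3) arrays of corresponding C-alpha coordinates.
    """
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the 3x3 covariance matrix
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])       # guard against improper rotations
    R = V @ D @ Wt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Toy check: a rotated-plus-noise copy should give a small RMSD
rng = np.random.default_rng(2)
model = rng.normal(size=(500, 3)) * 10.0
angle = np.radians(30.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
template = model @ rot.T + rng.normal(scale=0.5, size=model.shape)
print(f"C-alpha RMSD: {kabsch_rmsd(model, template):.2f} A")
```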
Arctic Messages: Arctic Research in the Vocabulary of Poets and Artists
NASA Astrophysics Data System (ADS)
Samsel, F.
2017-12-01
Arctic Messages is a series of prints created by a multidisciplinary team designed to build understanding and encourage dialogue about the changing Arctic ecosystems and the impacts on global weather patterns. Our team comprised of Arctic researchers, a poet, a visual artist, photographers and visualization experts set out to blend the vocabularies of our disciplines in order to provide entry into the content for diverse audiences. Arctic Messages is one facet of our broader efforts experimenting with mediums of communication able to provide entry to those of us outside scientific of fields. We believe that the scientific understanding of change presented through the languages art will speak to our humanity as well as our intellect. The prints combine poetry, painting, visualization, and photographs, drawn from the Arctic field studies of the Next Generation Ecosystem Experiments research team at Los Alamos National Laboratory. The artistic team interviewed the scientists, read their papers and poured over their field blogs. The content and concepts are designed to portray the wonder of nature, the complexity of the science and the dedication of the researchers. Smith brings to life the intertwined connection between the research efforts, the ecosystems and the scientist's experience. Breathtaking photography of the research site is accompanied by Samsel's drawings and paintings of the ecosystem relationships and geological formations. Together they provide entry to the variety and wonder of life on the Arctic tundra and that resting quietly in the permafrost below. Our team has experimented with many means of presentation from complex interactive systems to quiet individual works. Here we are presenting a series of prints, each one based on a single thread of the research or the scientist's experience but containing intertwined relationships similar to the ecosystems they represent. Earlier interactive systems, while engaging, were not tuned to those seeking quieter contemplation. The long linear work spreads across the wall enable viewers to explore the content of interest at the pace and through the vocabulary that speaks to them.
Turning Content into Conversation: How The GLOBE Program is Growing its Brand Online
NASA Astrophysics Data System (ADS)
Zwerin, R.; Randolph, J. G.; Andersen, T.; Mackaro, J.; Malmberg, J.; Tessendorf, S. A.; Wegner, K.
2012-12-01
Social Media is now a ubiquitous way for individuals, corporations, governments and communities to communicate. However, the same does not hold quite as true for the science community as many science educators, thought leaders and science programs are either reluctant or unable to build and cultivate a meaningful social media strategy. This presentation will show how The GLOBE Program uses social media to disseminate messages, build a meaningful and engaged following and grow a brand on an international scale using a proprietary Inside-Out strategy that leverages social media platforms such as Facebook, LinkedIn, Twitter, YouTube and Blogs to significantly increase influencers on a worldwide scale. In addition, this poster presentation will be interactive, so viewers will be able to touch and feel the social experience. Moreover, GLOBE representatives will be on hand to talk viewers through how they can implement a social media strategy that will allow them to turn their content into meaningful conversation. About The GLOBE Program: GLOBE is a science and education program that connects a network of students, teachers and scientists from around the world to better understand, sustain and improve Earth's environment at local, regional and global scales. By engaging students in hands-on learning of Earth system science, GLOBE is an innovative way for teachers to get students of all ages excited about scientific discovery locally and globally. To date, more than 23 million measurements have been contributed to the GLOBE database, creating meaningful, standardized, global research-quality data sets that can be used in support of student and professional scientific research. Since beginning operations in 1995, over 58,000 trained teachers and 1.5 million students in 112 countries have participated in GLOBE. For more information or to become involved, visit www.globe.gov.
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering
Stone, John E.; Sherman, William R.; Schulten, Klaus
2016-01-01
Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization, however round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, that are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
Putting reward in art: A tentative prediction error account of visual art
Van de Cruys, Sander; Wagemans, Johan
2011-01-01
The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260
Empathy-Related Responses to Depicted People in Art Works
Kesner, Ladislav; Horáček, Jiří
2017-01-01
Existing theories of empathic response to visual art works postulate the primacy of automatic embodied reaction to images based on mirror neuron mechanisms. Arguing for a more inclusive concept of empathy-related response and integrating four distinct bodies of literature, we discuss contextual, and personal factors which modulate empathic response to depicted people. We then present an integrative model of empathy-related responses to depicted people in art works. The model assumes that a response to empathy-eliciting figural artworks engages the dynamic interaction of two mutually interlinked sets of processes: socio-affective/cognitive processing, related to the person perception, and esthetic processing, primarily concerned with esthetic appreciation and judgment and attention to non-social aspects of the image. The model predicts that the specific pattern of interaction between empathy-related and esthetic processing is co-determined by several sets of factors: (i) the viewer's individual characteristics, (ii) the context variables (which include various modes of priming by narratives and other images), (iii) multidimensional features of the image, and (iv) aspects of a viewer's response. Finally we propose that the model is implemented by the interaction of functionally connected brain networks involved in socio-cognitive and esthetic processing. PMID:28286487
Interactive Streamline Exploration and Manipulation Using Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Xin; Chen, Chun-Ming; Shen, Han-Wei
2015-01-12
Occlusion presents a major challenge in visualizing three-dimensional flow fields with streamlines. Displaying too many streamlines at once makes it difficult to locate interesting regions, but displaying too few streamlines risks missing important features. A more ideal streamline exploration model is to allow the viewer to freely move across the field that has been populated with interesting streamlines and pull away the streamlines that cause occlusion so that the viewer can inspect the hidden ones in detail. In this paper, we present a streamline deformation algorithm that supports such user-driven interaction with three-dimensional flow fields. We define a view-dependent focus+context technique that moves the streamlines occluding the focus area using a novel displacement model. To preserve the context surrounding the user-chosen focus area, we propose two shape models to define the transition zone for the surrounding streamlines, and the displacement of the contextual streamlines is solved interactively with a goal of preserving their shapes as much as possible. Based on our deformation model, we design an interactive streamline exploration tool using a lens metaphor. Our system runs interactively so that users can move their focus and examine the flow field freely.
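A heavily simplified version of the displacement idea, pushing streamline points that fall inside a screen-space lens radially away from the focus with a smooth falloff toward the lens boundary, can be sketched as follows. The falloff function and lens parameters are invented; the paper's view-dependent model and shape-preserving solve are considerably more involved.

```python
import numpy as np

def deform_streamline(points, focus, radius, strength=1.0):
    """Push 2D (screen-projected) streamline points radially away from a
    circular lens centred at `focus`; points outside the lens are untouched.

    points : (n, 2) array, focus : (2,) array, radius/strength : floats.
    """
    offsets = points - focus
    dist = np.linalg.norm(offsets, axis=1)
    inside = (dist < radius) & (dist > 1e-9)
    # Smooth falloff: maximal push at the centre, zero at the lens boundary
    falloff = (1.0 - dist[inside] / radius) ** 2
    directions = offsets[inside] / dist[inside][:, None]
    displaced = points.copy()
    displaced[inside] += strength * radius * falloff[:, None] * directions
    return displaced

# A straight hypothetical streamline passing near the focus point
line = np.column_stack([np.linspace(-2, 2, 9), np.zeros(9) + 0.2])
print(deform_streamline(line, focus=np.array([0.0, 0.0]), radius=1.0))
```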
Leveraging the power of music to improve science education
NASA Astrophysics Data System (ADS)
Crowther, Gregory J.; McFadden, Tom; Fleming, Jean S.; Davis, Katie
2016-01-01
We assessed the impact of music videos with science-based lyrics on content knowledge and attitudes in a three-part experimental research study of over 1000 participants (mostly K-12 students). In Study A, 13 of 15 music videos were followed by statistically significant improvements on questions about material covered in the videos, while performance on 'bonus questions' not covered by the videos did not improve. Video-specific improvement was observed in both basic knowledge and genuine comprehension (levels 1 and 2 of Bloom's taxonomy, respectively) and after both lyrics-only and visually rich versions of some videos. In Study B, musical versions of additional science videos were not superior to non-musical ones in their immediate impact on content knowledge, though musical versions were significantly more enjoyable. In Study C, a non-musical video on fossils elicited greater immediate test improvement than the musical version ('Fossil Rock Anthem'); however, viewers of the music video enjoyed a modest advantage on a delayed post-test administered 28 days later. Music video viewers more frequently rated their video as 'fun', and seemed more likely to revisit and/or share the video. Our findings contribute to a broader dialogue on promising new pedagogical strategies in science education.
Dynamic publication model for neurophysiology databases.
Gardner, D; Abato, M; Knuth, K H; DeBellis, R; Erde, S M
2001-08-29
We have implemented a pair of database projects, one serving cortical electrophysiology and the other invertebrate neurones and recordings. The design for each combines aspects of two proven schemes for information interchange. The journal article metaphor determined the type, scope, organization and quantity of data to comprise each submission. Sequence databases encouraged intuitive tools for data viewing, capture, and direct submission by authors. Neurophysiology required transcending these models with new datatypes. Time-series, histogram and bivariate datatypes, including illustration-like wrappers, were selected by their utility to the community of investigators. As interpretation of neurophysiological recordings depends on context supplied by metadata attributes, searches are via visual interfaces to sets of controlled-vocabulary metadata trees. Neurones, for example, can be specified by metadata describing functional and anatomical characteristics. Permanence is advanced by data model and data formats largely independent of contemporary technology or implementation, including Java and the XML standard. All user tools, including dynamic data viewers that serve as a virtual oscilloscope, are Java-based, free, multiplatform, and distributed by our application servers to any contemporary networked computer. Copyright is retained by submitters; viewer displays are dynamic and do not violate copyright of related journal figures. Panels of neurophysiologists view and test schemas and tools, enhancing community support.
NASA Astrophysics Data System (ADS)
Nedyalkov, Ivaylo
2016-11-01
After fifteen years of experience in rap, and ten in fluid mechanics, "I am coming here with high-Reynolds-number stamina; I can beat these rap folks whose flows are... laminar." The rap relates fluid flows to rap flows. The fluid concepts presented in the song have varying complexity and the listeners/viewers will be encouraged to read the explanations on a site dedicated to the rap. The music video will provide an opportunity to share high-quality fluid visualizations with a general audience. This talk will present the rap lyrics, the vision for the video, and the strategy for outreach. Suggestions and comments will be welcomed.
NASA Astrophysics Data System (ADS)
Onley, David; Steinberg, Gary
2004-04-01
The consequences of the Special Theory of Relativity are explored in a virtual world in which the speed of light is only 10 m/s. Ray tracing software and other visualization tools, modified to allow for the finite speed of light, are employed to create a video that brings to life a journey through this imaginary world. The aberration of light, the Doppler effect, the altered perception of time, and the power of incoming radiation are explored in separate segments of this 35-minute video. Several of the effects observed are new and quite unexpected. A commentary and animated explanations help keep the viewer from losing all perspective.
NASA Astrophysics Data System (ADS)
Meertens, C.; Wier, S.; Ahern, T.; Casey, R.; Weertman, B.; Laughbon, C.
2008-12-01
UNAVCO and the IRIS DMC are data service partners for seismic visualization, particularly for hypocentral data and tomography. UNAVCO provides the GEON Integrated Data Viewer (IDV), an extension of the Unidata IDV, a free, interactive, research-level, software display and analysis tool for data in 3D (latitude, longitude, depth) and 4D (with time), located on or inside the Earth. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three-dimensional geoscience data in the context of new remote and shared data sources. The GEON IDV supports data access from data sources using HTTP and FTP servers, OPeNDAP servers, THREDDS catalogs, RSS feeds, and WMS (web map) servers. The IRIS DMC (Data Management Center) has developed web services providing earthquake hypocentral data and seismic tomography model grids. These services can be called by the GEON IDV to access data at IRIS without copying files. The IRIS Earthquake Browser (IEB) is a web-based query tool for hypocentral data. The IEB combines the DMC's large database of more than 1,900,000 earthquakes with the Google Maps web interface. With the IEB you can quickly find earthquakes in any region of the globe and then import this information into the GEON Integrated Data Viewer where the hypocenters may be visualized. You can select earthquakes by region, time, depth, and magnitude. The IEB gives the IDV a URL to the selected data. The IDV then shows the data as maps or 3D displays, with interactive control of vertical scale, area, and map projection, and with symbol size and color controlled by magnitude or depth. The IDV can show progressive time animation of, for example, aftershocks filling a source region. The IRIS Tomoserver converts seismic tomography model output grids to NetCDF for use in the IDV. The Tomoserver accepts a tomographic model file as input from a user and provides an equivalent NetCDF file as output. The service supports NA04, S3D, A1D and CUB input file formats, contributed by their respective creators. The NetCDF file is saved to a location that can be referenced with a URL on an IRIS server. The URL for the NetCDF file is provided to the user. The user can download the data from IRIS, or copy the URL into the IDV directly for interpretation, and the IDV will access the data at IRIS. The Tomoserver conversion software was developed by Instrumental Software Technologies, Inc. Use cases with the GEON IDV and IRIS DMC data services will be shown.
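The query-by-URL pattern described above can be sketched as follows; the example builds a hypocenter request in the generic FDSN event-service style and is shown as an illustrative assumption rather than the exact IEB interface.

# Sketch: select hypocenters by region, time, depth, and magnitude via a web
# service, then hand the resulting URL to a viewer. Endpoint/parameter choices
# follow the generic FDSN event-service convention (an assumption here).
import requests

params = {
    "starttime": "2008-01-01", "endtime": "2008-12-31",
    "minlatitude": 30, "maxlatitude": 45,
    "minlongitude": -125, "maxlongitude": -110,
    "minmagnitude": 4.0, "maxdepth": 100,
    "format": "text",
}
url = "http://service.iris.edu/fdsnws/event/1/query"   # assumed service endpoint
response = requests.get(url, params=params, timeout=30)
print(response.url)          # this URL could be passed to a viewer such as the IDV
print(response.text[:500])   # pipe-delimited hypocenter listing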
The Effects of Intercultural Communication on Viewers' Perceptions.
ERIC Educational Resources Information Center
Pohl, Gayle M.
Three studies explored the impact of the controversial television docudrama "Death of a Princess" on viewers' attitudes, comprehension, and desire to continue viewing the film. Sixty students in undergraduate communication classes participated in Study I, which measured attitude change induced by the film, relative to the viewers' prior…
DNA sequence chromatogram browsing using JAVA and CORBA.
Parsons, J D; Buehler, E; Hillier, L
1999-03-01
DNA sequence chromatograms (traces) are the primary data source for all large-scale genomic and expressed sequence tag (EST) sequencing projects. Access to the sequencing traces assists many later analyses, for example contig assembly and polymorphism detection, but obtaining and using traces is problematic. Traces are not collected and published centrally, they are much larger than the base calls derived from them, and viewing them requires the interactivity of a local graphical client with local data. To provide efficient global access to DNA traces, we developed a client/server system based on flexible Java components integrated into other applications, including an applet for use in a WWW browser and a stand-alone trace viewer. Client/server interaction is facilitated by CORBA middleware, which provides a well-defined interface, a naming service, and location independence. The software is packaged as a Jar file available from the following URL: http://www.ebi.ac.uk/jparsons. Links to working examples of the trace viewers can be found at http://corba.ebi.ac.uk/EST. All the Washington University mouse EST traces are available for browsing at the same URL.
Ross, Craig S; Ostroff, Joshua; Jernigan, David H
2014-02-01
Underage alcohol use is a global public health problem and alcohol advertising has been associated with underage drinking. The alcohol industry regulates itself and is the primary control on alcohol advertising in many countries around the world, advising trade association members to advertise only in adult-oriented media. Despite high levels of compliance with these self-regulatory guidelines, in several countries youth exposure to alcohol advertising on television has grown faster than adult exposure. In the United States, we found that exposure for underage viewers ages 18-20 grew from 2005 through 2011 faster than any adult age group. Applying a method adopted from a court in the US to identify underage targeting of advertising, we found evidence of targeting of alcohol advertising to underage viewers ages 18-20. The court's rule appeared in Lockyer v. Reynolds (The People ex rel. Bill Lockyer v. R.J. Reynolds Tobacco Company, GIC764118, 2002). We demonstrated that alcohol companies were able to modify their advertising practices to maintain current levels of adult advertising exposure while reducing youth exposure.
NASA Astrophysics Data System (ADS)
Sanders, B. F.
2017-12-01
Flooding of coastal and fluvial systems is among the most significant natural hazards facing society, and damages have been escalating for decades globally and in the U.S. Almost all metropolitan areas are exposed to flood risk. The threat from river flooding is especially high in India and China, and coastal cities around the world are threatened by storm surge and rising sea levels. Several trends, including rising sea levels, urbanization, deforestation, and rural-to-urban population shifts, will increase flood exposure in the future. Flood impacts are escalating despite advances in hazards science and extensive effort to manage risks. The fundamental issue is not that flooding is becoming more severe, even though it is in some places, but rather that societies are becoming more vulnerable to flood impacts. A critical factor contributing to the escalation of flood impacts is that the most vulnerable sectors of communities are left out of processes to prepare for and respond to flooding. Furthermore, the translation of knowledge about flood hazards and vulnerabilities into actionable information for communities has not been effective. In Southern and Baja California, an interdisciplinary team of researchers has partnered with stakeholders in flood-vulnerable communities to co-develop flood hazard information systems designed to meet end-user needs for decision-making. The initiative leveraged the power of advanced, fine-scale hydraulic models of flooding to craft intuitive visualizations of context-sensitive scenarios. This presentation will cover the ways in which the process of flood inundation modeling served as a focal point for knowledge development, as well as the unique visualizations that populate online information systems accessible here: http://floodrise.uci.edu/online-flood-hazard-viewers/
Tips for better visual elements in posters and podium presentations.
Zerwic, J J; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M
2010-08-01
The ability to communicate effectively through posters and podium presentations using appropriate visual content and style is essential for health care educators. Our purpose is to offer suggestions for more effective visual elements of posters and podium presentations. We present the experiences of our multidisciplinary publishing group, whose combined experience and collaboration have given us an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation practice. Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast, and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting a light background with dark text (or vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience read text more easily. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience's view of either the presentation or the presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Health care educators often share information through posters and podium presentations; these tips should help make the visual elements of those presentations more effective.
Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung
2017-01-01
Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, from which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
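As a toy illustration of comparing graphs by topological similarity rather than by computing full layouts, the sketch below uses a simple degree-histogram "kernel"; the paper's actual graph kernels are more sophisticated, so this is only a conceptual stand-in.

# Toy topological-similarity score: compare normalized degree histograms.
# Not the paper's kernel; just the general idea of layout-free graph comparison.
import networkx as nx
import numpy as np

def degree_histogram_kernel(g1: nx.Graph, g2: nx.Graph, bins: int = 16) -> float:
    def signature(g):
        degrees = [d for _, d in g.degree()]
        hist, _ = np.histogram(degrees, bins=bins, range=(0, bins), density=True)
        return hist
    v1, v2 = signature(g1), signature(g2)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

a = nx.barabasi_albert_graph(500, 3, seed=1)   # scale-free graph
b = nx.barabasi_albert_graph(500, 3, seed=2)   # similar topology
c = nx.erdos_renyi_graph(500, 0.01, seed=3)    # different topology
print(degree_histogram_kernel(a, b), degree_histogram_kernel(a, c))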
A Visual Interface for Querying Heterogeneous Phylogenetic Databases.
Jamil, Hasan M
2017-01-01
Despite the recent growth in the number of phylogenetic databases, access to this wealth of resources remains largely driven by tools and form-based interfaces. It is our thesis that the flexibility afforded by declarative query languages may offer a better way to access these repositories and to pose truly powerful queries in unprecedented ways. In this paper, we propose a substantially enhanced closed visual query language, called PhyQL, that can be used to query phylogenetic databases represented in a canonical form. The canonical representation helps capture most phylogenetic tree formats conveniently, and is used as the storage model for our PhyloBase database, for which PhyQL serves as the query language. We have implemented a visual interface that lets end users pose PhyQL queries using visual icons and drag-and-drop operations defined over them. Once a query is posed, the interface translates the visual query into a Datalog query for execution over the canonical database. Responses are returned as hyperlinks to phylogenies that can be viewed in several formats using the tree viewers supported by PhyloBase. Results cached in the PhyQL buffer allow secondary querying on the computed results, making it a truly powerful querying architecture.
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, in contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day in the environment, previously searched items changed color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Memory-guided attention during active viewing of edited dynamic scenes.
Valuch, Christian; König, Peter; Ansorge, Ulrich
2017-01-01
Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.
Canvas and cosmos: Visual art techniques applied to astronomy data
NASA Astrophysics Data System (ADS)
English, Jayanne
Bold color images from telescopes act as extraordinary ambassadors for research astronomers because they pique the public's curiosity. But are they snapshots documenting physical reality? Or are we looking at artistic spacescapes created by digitally manipulating astronomy images? This paper provides a tour of how original black and white data, from all regimes of the electromagnetic spectrum, are converted into the color images gracing popular magazines, numerous websites, and even clothing. The history and methods of the technical construction of these images are outlined. However, the paper focuses on introducing the scientific reader to visual literacy (e.g. human perception) and techniques from art (e.g. composition, color theory), since these techniques can produce not only striking but also politically powerful public outreach images. When such images are created by research astronomers, the cultures of science and visual art can be balanced, and the image can illuminate scientific results strongly enough that it is also used in research publications. Included are reflections on how these images could feed back into astronomy research endeavors and future forms of visualization, as well as on the relevance of outreach images to visual art. (See the color online PDF version at http://dx.doi.org/10.1142/S0218271817300105; the figures can be enlarged in PDF viewers.)
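A minimal sketch of the basic channel-assignment step described above, using synthetic arrays in place of real telescope data; real workflows add registration, careful scaling, and the compositional choices the paper discusses.

# Sketch: stretch three grayscale "exposures" and assign them to R, G, B.
# The random arrays below are stand-ins for real narrow-band images.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
shape = (256, 256)
bands = [rng.gamma(2.0, 1.0, shape) for _ in range(3)]   # synthetic stand-ins

def stretch(img, percentile=99.5):
    """Asinh stretch after clipping the brightest pixels."""
    clipped = np.clip(img, 0, np.percentile(img, percentile))
    scaled = np.arcsinh(clipped)
    return scaled / scaled.max()

rgb = np.dstack([stretch(b) for b in bands])   # assign stretched bands to color channels
plt.imshow(rgb)
plt.axis("off")
plt.savefig("composite.png", dpi=150)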
Visually representing reality: aesthetics and accessibility aspects
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2009-02-01
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image is described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision were only added recently - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the sometimes opposing interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
NASA Astrophysics Data System (ADS)
Roccatello, E.; Nozzi, A.; Rumor, M.
2013-05-01
This paper illustrates the key concepts behind the design and development of a framework, based on OGC services, capable of visualizing large-scale 3D geospatial data streamed over the web. WebGISes are traditionally bound to a simplified, two-dimensional representation of reality, and although they successfully address the lack of flexibility and simplicity of traditional desktop clients, considerable effort is still needed to reach desktop GIS features such as 3D visualization. The motivations behind this work lie in the widespread availability of OGC Web Services inside government organizations and in web browsers' support for the HTML5 and WebGL standards. This delivers an improved user experience, similar to desktop applications, and therefore allows traditional WebGIS features to be augmented with a 3D visualization framework. This work can be seen as an extension of the Cityvu project, started in 2008 with the aim of building a plug-in-free OGC CityGML viewer. The resulting framework has also been integrated into existing 3D GIS software products and will be made available in the coming months.
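For readers unfamiliar with the OGC services such a framework consumes, the sketch below issues a standard WMS GetMap request; the server URL and layer name are hypothetical placeholders, not part of the described system.

# Sketch of a WMS 1.3.0 GetMap request; endpoint and layer are placeholders.
import requests

wms_url = "https://example.org/geoserver/wms"   # hypothetical endpoint
params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "city:buildings",                 # hypothetical layer
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "45.3,11.8,45.5,12.0",
    "WIDTH": 512, "HEIGHT": 512,
    "FORMAT": "image/png",
}
tile = requests.get(wms_url, params=params, timeout=30)
with open("tile.png", "wb") as f:
    f.write(tile.content)                       # basemap tile for the viewer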
Virtual reality and 3D animation in forensic visualization.
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
2010-09-01
Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.
Poison frog colors are honest signals of toxicity, particularly for bird predators.
Maan, Martine E; Cummings, Molly E
2012-01-01
Antipredator defenses and warning signals typically evolve in concert. However, the extensive variation across taxa in both these components of predator deterrence and the relationship between them are poorly understood. Here we test whether there is a predictive relationship between visual conspicuousness and toxicity levels across 10 populations of the color-polymorphic strawberry poison frog, Dendrobates pumilio. Using a mouse-based toxicity assay, we find extreme variation in toxicity between frog populations. This variation is significantly positively correlated with frog coloration brightness, a viewer-independent measure of visual conspicuousness (i.e., total reflectance flux). We also examine conspicuousness from the view of three potential predator taxa, as well as conspecific frogs, using taxon-specific visual detection models and three natural background substrates. We find very strong positive relationships between frog toxicity and conspicuousness for bird-specific perceptual models. Weaker but still positive correlations are found for crab and D. pumilio conspecific visual perception, while frog coloration as viewed by snakes is not related to toxicity. These results suggest that poison frog colors can be honest signals of prey unpalatability to predators and that birds in particular may exert selection on aposematic signal design. © 2011 by The University of Chicago.
Hegarty, Mary; Canham, Matt S; Fabrikant, Sara I
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between, and weight, control-grid-interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
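A much-simplified sketch of the spatial-versus-temporal trade-off underlying such deinterlacers is shown below; the weight here is a fixed constant, whereas the proposed method derives it from spectral residue and uses control grid interpolation rather than plain line averaging.

# Sketch: fill missing field lines either by averaging neighboring lines of the
# same field (spatial) or by copying the line from the previous frame (temporal),
# blended by a fixed weight. Simplified stand-in for the described approach.
import numpy as np

def deinterlace_even_field(field_frame: np.ndarray,
                           prev_frame: np.ndarray,
                           weight: float = 0.5) -> np.ndarray:
    """field_frame holds valid even rows; interior odd rows are reconstructed."""
    out = field_frame.astype(float)
    rows = out.shape[0]
    for r in range(1, rows - 1, 2):                    # missing (odd) lines
        spatial = 0.5 * (out[r - 1] + out[r + 1])      # average of neighboring lines
        temporal = prev_frame[r].astype(float)         # line reused from previous frame
        out[r] = weight * spatial + (1.0 - weight) * temporal
    return out

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 640))
curr = rng.integers(0, 256, (480, 640))
print(deinterlace_even_field(curr, prev).shape)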
Scharl, Arno; Hubmann-Haidvogel, Alexander; Jones, Alistair; Fischl, Daniel; Kamolov, Ruslan; Weichselbraun, Albert; Rafelsberger, Walter
2016-01-01
This paper presents a Web intelligence portal that captures and aggregates news and social media coverage about "Game of Thrones", an American drama television series created for the HBO television network based on George R.R. Martin's series of fantasy novels. The system collects content from the Web sites of Anglo-American news media as well as from four social media platforms: Twitter, Facebook, Google+ and YouTube. An interactive dashboard with trend charts and synchronized visual analytics components not only shows how often Game of Thrones events and characters are being mentioned by journalists and viewers, but also provides a real-time account of concepts that are being associated with the unfolding storyline and each new episode. Positive or negative sentiment is computed automatically, which sheds light on the perception of actors and new plot elements.
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native, GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
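The underlying task, inspecting reads aligned to a reference region in a BAM file, can be sketched with the pysam library as follows; this illustrates the data being visualized rather than Alview's own code, and the file name and region are examples only.

# Sketch: iterate over aligned reads in a region of an indexed BAM file.
import pysam

with pysam.AlignmentFile("sample.bam", "rb") as bam:        # example file, index assumed
    for read in bam.fetch("chr1", 100000, 100200):          # example region of interest
        print(read.query_name, read.reference_start, read.cigarstring)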
ePMV embeds molecular modeling into professional animation software environments.
Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J
2011-03-09
Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
Visualization of multi-INT fusion data using Java Viewer (JVIEW)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Aved, Alex; Nagy, James; Scott, Stephen
2014-05-01
Visualization is important for multi-intelligence fusion, and we demonstrate issues in presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve objective sensor measurements, while human-derived solutions (e.g., text) typically involve language processing. Both kinds of results can be displayed geographically for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users, be they operators or analysts. Operators require near-real-time solutions, while analysts can take advantage of non-real-time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept, which has been applied to piloting, space situational awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big-data solution for a multi-intelligence, context-enhanced information fusion application.
An investigation into the use of color as a device to convey memes during the Little Ice Age
NASA Astrophysics Data System (ADS)
White, Peter A.
Color is used as a tool in visual communication to express ideas in a symbolic fashion. It can also be used as a guide to assist the viewer in the visual narrative. Artwork created between 1300 and 1850 in northern and central Europe provides a comprehensive perspective on the use of color as symbol and color as an elucidative device. This period is known as the Little Ice Age, and it spans European history between the Medieval period and the Romantic era. The extreme climatic conditions of this era caused profound changes in society on many levels and influenced the use of color in paintings throughout this chapter in history. The new paradigm of the science of ideas, called memetics, provides a framework to analyze the expression of ideas through the use of color within this span of time.
Interactive Visualization of Computational Fluid Dynamics using Mosaic
NASA Technical Reports Server (NTRS)
Clucas, Jean; Watson, Velvin; Chancellor, Marisa K. (Technical Monitor)
1994-01-01
The Web provides new methods for accessing information world-wide, but the current text-and-pictures approach neither utilizes all the Web's possibilities nor allows for its limitations. While the inclusion of pictures and animations in a paper communicates more effectively than text alone, it is essentially an extension of the concept of "publication." Also, as use of the Web increases, putting images and animations online will quickly overload even the "Information Superhighway." We need to find forms of communication that take advantage of the special nature of the Web. This paper presents one approach: the use of the Internet and the Mosaic interface for data sharing and collaborative analysis. We will describe (and, in the presentation, demonstrate) our approach: using FAST (Flow Analysis Software Toolkit), a scientific visualization package, as a data viewer and interactive tool called from Mosaic. Our intent is to stimulate the development of other tools that utilize the unique nature of electronic communication.
Weather uncertainty versus climate change uncertainty in a short television weather broadcast
NASA Astrophysics Data System (ADS)
Witte, J.; Ward, B.; Maibach, E.
2011-12-01
For TV meteorologists, talking about uncertainty in a two-minute forecast can be a real challenge. It can quickly open the way to viewer confusion. TV meteorologists understand the uncertainties of short-term weather models and have different methods to convey degrees of confidence to the viewing public. Visual examples are seen in the 7-day forecasts and the hurricane track forecasts. But does the public really understand a 60 percent chance of rain or the hurricane cone? Communication of climate model uncertainty is even more daunting. The viewing public can quickly switch to denial of solid science. A short review of the latest national survey of TV meteorologists by George Mason University and lessons learned from a series of climate change workshops with TV broadcasters provide valuable insights into effectively using visualizations and invoking multimedia-learning theories in weather forecasts to improve public understanding of climate change.
Visual perception of male body attractiveness.
Fan, J; Dai, W; Liu, F; Wu, J
2005-02-07
Based on 69 scanned Chinese male subjects and 25 Caucasian male subjects, the present study showed that the volume height index (VHI) is the most important visual cue to male body attractiveness for young Chinese viewers among the many body parameters examined in the study. VHI alone can explain ca. 73% of the variance of male body attractiveness ratings. The effect of VHI can be fitted with two half bell-shaped exponential curves with an optimal VHI at 17.6 l m⁻² and 18.0 l m⁻² for female raters and male raters, respectively. In addition to VHI, other body parameters or ratios can have small but significant effects on male body attractiveness. Body proportions associated with fitness will enhance male body attractiveness. It was also found that there is an optimal waist-to-hip ratio (WHR) at 0.8, and deviations from this optimal WHR reduce male body attractiveness.
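A hedged sketch of fitting the kind of two-sided "half bell" curve described above: attractiveness peaks at an optimal VHI and falls off on either side at possibly different rates. The data and anchor values below are synthetic, not the study's measurements.

# Sketch: fit a two-sided bell curve to synthetic attractiveness ratings.
import numpy as np
from scipy.optimize import curve_fit

def two_sided_bell(vhi, peak, vhi_opt, width_lo, width_hi):
    width = np.where(vhi < vhi_opt, width_lo, width_hi)   # different fall-off per side
    return peak * np.exp(-((vhi - vhi_opt) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
vhi = np.linspace(12, 26, 60)
ratings = two_sided_bell(vhi, 6.5, 17.6, 2.0, 3.5) + rng.normal(0, 0.3, vhi.size)  # synthetic

params, _ = curve_fit(two_sided_bell, vhi, ratings, p0=[6, 18, 2, 3])
print("fitted optimum VHI:", params[1])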
PDBFlex: exploring flexibility in protein structures
Hrabe, Thomas; Li, Zhanwen; Sedova, Mayya; Rotkiewicz, Piotr; Jaroszewski, Lukasz; Godzik, Adam
2016-01-01
The PDBFlex database, available freely and with no login requirements at http://pdbflex.org, provides information on flexibility of protein structures as revealed by the analysis of variations between depositions of different structural models of the same protein in the Protein Data Bank (PDB). PDBFlex collects information on all instances of such depositions, identifying them by a 95% sequence identity threshold, performs analysis of their structural differences and clusters them according to their structural similarities for easy analysis. PDBFlex contains tools and viewers enabling in-depth examination of structural variability, including: 2D-scaling visualization of RMSD distances between structures of the same protein, graphs of average local RMSD in the aligned structures of protein chains, graphical presentation of differences in secondary structure and observed structural disorder (unresolved residues), difference distance maps between all sets of coordinates and 3D views of individual structures and simulated transitions between different conformations, the latter displayed using JSMol visualization software. PMID:26615193
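The core analysis, pairwise RMSD between depositions of the same protein followed by clustering of the distance matrix, can be sketched as follows with synthetic coordinates; this is a conceptual illustration, not PDBFlex's implementation.

# Sketch: pairwise RMSD between pre-aligned coordinate sets, then clustering.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

rng = np.random.default_rng(1)
base = rng.normal(size=(120, 3))                     # 120 CA atoms, synthetic 3D coords
models = [base + rng.normal(scale=s, size=base.shape) for s in (0.2, 0.25, 1.5, 1.6)]

n = len(models)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = rmsd(models[i], models[j])

clusters = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(dist.round(2))
print("cluster labels:", clusters)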
Calculation and visualization of atomistic mechanical stresses in nanomaterials and biomolecules.
Fenley, Andrew T; Muddana, Hari S; Gilson, Michael K
2014-01-01
Many biomolecules have machine-like functions, and accordingly are discussed in terms of mechanical properties like force and motion. However, the concept of stress, a mechanical property that is of fundamental importance in the study of macroscopic mechanics, is not commonly applied in the biomolecular context. We anticipate that microscopical stress analyses of biomolecules and nanomaterials will provide useful mechanistic insights and help guide molecular design. To enable such applications, we have developed Calculator of Atomistic Mechanical Stress (CAMS), an open-source software package for computing atomic resolution stresses from molecular dynamics (MD) simulations. The software also enables decomposition of stress into contributions from bonded, nonbonded and Generalized Born potential terms. CAMS reads GROMACS topology and trajectory files, which are easily generated from AMBER files as well; and time-varying stresses may be animated and visualized in the VMD viewer. Here, we review relevant theory and present illustrative applications.
The Virtual Pelvic Floor, a tele-immersive educational environment.
Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.
1999-01-01
This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378
KFC Server: interactive forecasting of protein interaction hot spots.
Darnell, Steven J; LeGault, Laura; Mitchell, Julie C
2008-07-01
The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.
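A hedged sketch of the general machine-learning pattern (not the KFC model itself): train a classifier on per-residue structural-environment features and score new interface residues. The features and data below are synthetic placeholders.

# Sketch: classify interface residues as hot spots from structural-environment
# features; features, labels, and the classifier choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# hypothetical features: packing density, solvent accessibility, contact count
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(0, 0.5, 200) > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

new_residues = rng.normal(size=(5, 3))               # residues from a new interface
print(model.predict_proba(new_residues)[:, 1])       # probability of being a hot spot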
Display characterization by eye: contrast ratio and discrimination throughout the grayscale
NASA Astrophysics Data System (ADS)
Gille, Jennifer; Arend, Larry; Larimer, James O.
2004-06-01
We have measured the ability of observers to estimate the contrast ratio (maximum white luminance / minimum black or gray) of various displays and to assess luminous discrimination over the tonescale of the display. This was done using only the computer itself and easily-distributed devices such as neutral density filters. The ultimate goal of this work is to see how much of the characterization of a display can be performed by the ordinary user in situ, in a manner that takes advantage of the unique abilities of the human visual system and measures visually important aspects of the display. We discuss the relationship among contrast ratio, tone scale, display transfer function and room lighting. These results may contribute to the development of applications that allow optimization of displays for the situated viewer / display system without instrumentation and without indirect inferences from laboratory to workplace.
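For concreteness, the quantities discussed above can be sketched as follows: contrast ratio from white and black luminances, and the luminance steps of a simple gamma tonescale across the grayscale. The numbers and the gamma model are illustrative assumptions, not the study's measurements.

# Sketch: contrast ratio and a simple gamma tonescale; values are illustrative.
import numpy as np

def contrast_ratio(white_cd_m2: float, black_cd_m2: float) -> float:
    return white_cd_m2 / black_cd_m2

def tonescale(gray_levels: np.ndarray, white: float, black: float, gamma: float = 2.2):
    normalized = gray_levels / gray_levels.max()
    return black + (white - black) * normalized ** gamma

levels = np.arange(0, 256, 32)
print("contrast ratio:", contrast_ratio(200.0, 0.5))
print("luminance at sampled gray levels:", tonescale(levels, 200.0, 0.5).round(2))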
White constancy method for mobile displays
NASA Astrophysics Data System (ADS)
Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo
2014-03-01
Consumers' demands for the image quality of mobile devices are increasing as smartphones become widely used. For example, colors may be perceived differently when content is displayed under different illuminants: white displayed under an incandescent lamp is perceived as bluish, while the same content under LED light is perceived as yellowish. When the perceived white changes with the illuminant, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of the human visual experiments, and white constancy values for image control are determined from the predesigned functions. Experimental results indicate that the proposed method yields better image quality by keeping the displayed white consistent.
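A hedged sketch of the mapping-function idea: the white point chosen by viewers is modeled as a function of ambient correlated color temperature (CCT), and channel gains are interpolated from that predesigned function. The anchor values below are illustrative, not the paper's measurements.

# Sketch: interpolate white-constancy channel gains from ambient CCT;
# anchor values are invented for illustration.
import numpy as np

ambient_cct = np.array([2700.0, 4000.0, 5000.0, 6500.0])        # illuminants (K)
red_gain   = np.array([1.00, 0.97, 0.95, 0.92])                 # R gain of chosen white
blue_gain  = np.array([0.88, 0.93, 0.97, 1.00])                 # B gain of chosen white

def white_constancy_gains(cct: float):
    r = np.interp(cct, ambient_cct, red_gain)
    b = np.interp(cct, ambient_cct, blue_gain)
    return r, 1.0, b   # (R, G, B) gains applied to the displayed image

print(white_constancy_gains(3500.0))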
A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.
Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson
2017-01-01
In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, visualization based on 3D volumetric rendering of the data is considered useful and its field of application has grown. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a Virtual Reality interface for fracture identification on 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction and ergonomics, and also the users' opinions on how VR applications can be useful in healthcare. Among other results, we found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.
Subjective evaluation of mobile 3D video content: depth range versus compression artifacts
NASA Astrophysics Data System (ADS)
Jumisko-Pyykkö, Satu; Haustola, Tomi; Boev, Atanas; Gotchev, Atanas
2011-02-01
Mobile 3D television is a new form of media experience, which combines the freedom of mobility with the greater realism of presenting visual scenes in 3D. Achieving this combination is a challenging task, as a greater viewing experience has to be achieved with the limited resources of the mobile delivery channel, such as limited bandwidth and a power-constrained handheld player. This challenge creates a need for tight optimization of the overall mobile 3DTV system. The depth range and compression artifacts present in the played 3D video are two major factors that influence the viewer's subjective quality of experience and satisfaction. The primary goal of this study has been to examine the influence of varying depth and compression artifacts on the subjective quality of experience for mobile 3D video content. In addition, the influence of the studied variables on simulator sickness symptoms has been studied, and a vocabulary-based descriptive quality-of-experience assessment has been conducted for a subset of variables in order to understand the perceptual characteristics in detail. In the experiment, 30 participants evaluated the overall quality of different 3D video contents with varying depth ranges, compressed with varying quantization parameters. The test video content was presented on a portable autostereoscopic LCD display with a horizontal double-density pixel arrangement. The results of the psychometric study indicate that compression artifacts are a dominant factor determining the quality of experience compared to varying depth range. More specifically, content with strong compression was rejected by the viewers and deemed unacceptable. The results of the descriptive study confirm the dominance of visible spatial artifacts, along with the added value of depth for artifact-free content. The level of visual discomfort was not found to be objectionable.
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Valadez, Victor; Ysunza, Antonio; Ocharan-Hernandez, Esther; Garrido-Bustamante, Norma; Sanchez-Valerio, Araceli; Pamplona, Ma C
2012-09-01
Vocal Nodules (VN) are a functional voice disorder associated with voice misuse and abuse in children. There are few reports addressing vocal parameters in children with VN, especially after a period of vocal rehabilitation. The purpose of this study is to describe measurements of vocal parameters, including Fundamental Frequency (FF), Shimmer (S), and Jitter (J), videonasolaryngoscopy examination, and clinical perceptual assessment, before and after voice therapy in children with VN. Voice therapy was provided using visual support through Speech-Viewer software. Twenty patients with VN were studied. An acoustical analysis of voice was performed and compared with data from a control group of subjects matched by age and gender. Also, clinical perceptual assessment of voice and videonasolaryngoscopy were performed for all patients with VN. After a period of voice therapy, provided with visual support using Speech Viewer-III (SV-III-IBM) software, new acoustical analyses, perceptual assessments and videonasolaryngoscopies were performed. Before the onset of voice therapy, there was a significant difference (p<0.05) in mean FF, S, and J between the patients with VN and the control subjects. After the voice therapy period, a significant improvement (p<0.05) was found in all acoustic voice parameters. Moreover, perceptual voice analysis demonstrated improvement in all cases. Finally, videonasolaryngoscopy demonstrated that vocal nodules were no longer discernible on the vocal folds in any of the cases. SV-III software seems to be a safe and reliable method for providing voice therapy in children with VN. Acoustic voice parameters, perceptual data and videonasolaryngoscopy were significantly improved after the speech therapy period was completed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
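The standard acoustic measures mentioned above can be sketched from a sequence of estimated glottal periods and peak amplitudes (synthetic here): local jitter and shimmer are the mean absolute cycle-to-cycle differences relative to the mean period and mean amplitude, usually reported as percentages.

# Sketch: local jitter and shimmer from synthetic period/amplitude sequences.
import numpy as np

def local_jitter(periods_s: np.ndarray) -> float:
    return float(np.mean(np.abs(np.diff(periods_s))) / np.mean(periods_s) * 100)

def local_shimmer(amplitudes: np.ndarray) -> float:
    return float(np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100)

rng = np.random.default_rng(0)
periods = 1.0 / 250.0 + rng.normal(0, 5e-5, 100)     # synthetic ~250 Hz child voice
amps = 1.0 + rng.normal(0, 0.03, 100)                # synthetic peak amplitudes
print("F0 (Hz):", round(1.0 / periods.mean(), 1))
print("jitter (%):", round(local_jitter(periods), 2))
print("shimmer (%):", round(local_shimmer(amps), 2))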
NASA Astrophysics Data System (ADS)
Nguyen, A.; Mueller, C.; Brooks, A. N.; Kislik, E. A.; Baney, O. N.; Ramirez, C.; Schmidt, C.; Torres-Perez, J. L.
2014-12-01
The Sierra Nevada is experiencing changes in hydrologic regimes, such as decreases in snowmelt and peak runoff, which affect forest health and the availability of water resources. Currently, the USDA Forest Service Region 5 is undergoing Forest Plan revisions to include climate change impacts into mitigation and adaptation strategies. However, there are few processes in place to conduct quantitative assessments of forest conditions in relation to mountain hydrology, while easily and effectively delivering that information to forest managers. To assist the USDA Forest Service, this study is the final phase of a three-term project to create a Decision Support System (DSS) to allow ease of access to historical and forecasted hydrologic, climatic, and terrestrial conditions for the entire Sierra Nevada. This data is featured within three components of the DSS: the Mapping Viewer, Statistical Analysis Portal, and Geospatial Data Gateway. Utilizing ArcGIS Online, the Sierra DSS Mapping Viewer enables users to visually analyze and locate areas of interest. Once the areas of interest are targeted, the Statistical Analysis Portal provides subbasin level statistics for each variable over time by utilizing a recently developed web-based data analysis and visualization tool called Plotly. This tool allows users to generate graphs and conduct statistical analyses for the Sierra Nevada without the need to download the dataset of interest. For more comprehensive analysis, users are also able to download datasets via the Geospatial Data Gateway. The third phase of this project focused on Python-based data processing, the adaptation of the multiple capabilities of ArcGIS Online and Plotly, and the integration of the three Sierra DSS components within a website designed specifically for the USDA Forest Service.
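As a rough illustration of the kind of browser-based statistical plot the Statistical Analysis Portal delivers with Plotly, the sketch below charts a subbasin time series using Plotly's Python API. The variable names and data are hypothetical placeholders, not the project's actual datasets or code.

```python
# Minimal sketch of a Plotly time-series chart in the spirit of the DSS
# Statistical Analysis Portal; the subbasin snowmelt values are invented.
import numpy as np
import plotly.graph_objects as go

years = np.arange(1980, 2021)
snowmelt = 400 + 30 * np.sin(years / 4.0) - 1.5 * (years - 1980)  # hypothetical trend (mm)

fig = go.Figure()
fig.add_trace(go.Scatter(x=years, y=snowmelt, mode="lines+markers",
                         name="April snowmelt (hypothetical subbasin)"))
fig.update_layout(title="Sierra Nevada subbasin snowmelt (illustrative)",
                  xaxis_title="Water year", yaxis_title="Snowmelt (mm)")
fig.write_html("subbasin_snowmelt.html")  # shareable, interactive chart
```

Because the chart is written out as self-contained HTML, users can pan, zoom, and read off values in the browser without downloading the underlying dataset, which is the interaction model described above.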
AIRS Version 6 Products and Data Services at NASA GES DISC
NASA Astrophysics Data System (ADS)
Ding, F.; Savtchenko, A. K.; Hearty, T. J.; Theobald, M. L.; Vollmer, B.; Esfandiari, E.
2013-12-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the Atmospheric Infrared Sounder (AIRS) mission. The AIRS mission is entering its 11th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released data from the Version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. Among the most substantial advances are: improved soundings of Tropospheric and Sea Surface Temperatures; larger improvements with increasing cloud cover; improved retrievals of surface spectral emissivity; near-complete removal of spurious temperature bias trends seen in earlier versions; substantially improved retrieval yield (i.e., number of soundings accepted for output) for climate studies; AIRS-Only retrievals with comparable accuracy to AIRS+AMSU (Advanced Microwave Sounding Unit) retrievals; and more realistic hemispheric seasonal variability and global distribution of carbon monoxide. The GES DISC is working to bring the distribution services up-to-date with these new developments. Our focus is on popular services, like variable subsetting and quality screening, which are impacted by the new elements in Version 6. Other developments in visualization services, such as Giovanni, Near-Real Time imagery, and a granule-map viewer, are progressing along with the introduction of the new data; each service presents its own challenge. This presentation will demonstrate the most significant improvements in Version 6 AIRS products, such as newly added variables (higher resolution outgoing longwave radiation, new cloud property products, etc.), the new quality control schema, and improved retrieval yields. We will also demonstrate the various distribution and visualization services for AIRS data products. The cloud properties, model physics, and water and energy cycles research communities are invited to take advantage of the improvements in Version 6 AIRS products and the various services at GES DISC which provide them.
Art-inspired Presentation of Earth Science Research
NASA Astrophysics Data System (ADS)
Bugbee, K.; Smith, D. K.; Smith, T.; Conover, H.; Robinson, E.
2016-12-01
This presentation features two posters inspired by modern and contemporary art that showcase different Earth science data at NASA's Global Hydrology Resource Center Distributed Active Archive Center (GHRC DAAC). The posters are intended for the science-interested public. They are designed to tell an interesting story and to stimulate interest in the science behind the art. "Water makes the World" is a photo mosaic of cloud water droplet and ice crystal images combined to depict the Earth in space. The individual images were captured using microphysical probes installed on research aircraft flown in the Mid-latitude Continental Convective Clouds Experiment (MC3E). MC3E was one of a series of ground validation field experiments for NASA's Global Precipitation Measurement (GPM) mission which collected ground and airborne precipitation datasets supporting the physical validation of satellite-based precipitation retrieval algorithms. "The Lightning Capital of the World" is laid out on a grid of black lines and primary colors in the style of Piet Mondrian. This neoplastic or "new plastic art" style was founded in the Netherlands and was used in art from 1917 to 1931. The poster colorfully describes the Catatumbo lightning phenomenon from a scientific, social and historical perspective. It is a still representation of a moving art project. To see this poster in action, visit the GHRC YouTube channel at http://tinyurl.com/hd6crx8 or stop by during the poster session. Both posters were created for a special Research as Art session at the 2016 Federation of Earth Science Information Partners (ESIP) summer meeting in Durham, NC. This gallery-style event challenged attendees to use visual media to show how the ESIP community uses data. Both of these visually appealing posters draw the viewer in and then provide information on the science data used, as well as links to further information. The GHRC DAAC is a joint venture of NASA's Marshall Space Flight Center and the Information Technology and Systems Center at UAH. GHRC provides a comprehensive active archive of both data and knowledge augmentation services.
Gunter, B; Furnham, A
1984-06-01
This paper reports two studies which examined the mediating effects of programme genre and physical form of violence on viewers' perceptions of violent TV portrayals. In Experiment 1, a panel of British viewers saw portrayals from five programme genres: British crime-drama series, US crime-drama series, westerns, science-fiction series and cartoons, which featured either fights or shootings. In Experiment 2, the same viewers rated portrayals from British crime-drama and westerns which featured four types of violence: fist-fights, shootings, stabbings and explosions. All scenes were rated along eight unipolar scales. Panel members also completed four subscales of a personal hostility inventory. Results showed that both fictional setting and physical form had significant effects on viewers' perceptions of televised violence. British crime-drama portrayals, and portrayals that featured shootings and stabbings, were rated as most violent and disturbing. Also, there were strong differences between viewers with different self-reported propensities towards either verbal or physical aggression. More physically aggressive individuals tended to perceive unarmed physical violence as less violent than did more verbally aggressive types.
Dendroscope: An interactive viewer for large phylogenetic trees
Huson, Daniel H; Richter, Daniel C; Rausch, Christian; Dezulian, Tobias; Franz, Markus; Rupp, Regula
2007-01-01
Background Research in evolution requires software for visualizing and editing phylogenetic trees, increasingly for very large datasets, such as those arising in expression analysis or metagenomics. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as a part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. PMID:18034891
Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives
NASA Astrophysics Data System (ADS)
Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.
2017-12-01
During the last decades a varied set of Heliophysics missions have allowed the scientific community to gain a better knowledge of the solar atmosphere and activity. The remote sensing images of missions such as SOHO have paved the ground for Helio-based spatial data visualization software such as JHelioViewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster provides a wide base for plot visualization software whose reach is still far from being fully exploited. The Heliophysics Science Archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, needs evolve, and scientists involved in new missions require, among other improvements, plotting of multi-variable data, interactive synchronization of heat-map stacks, and selection of axis variables. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches for visualizing time-series followed within the ESA Heliophysics Archives and their foreseen evolution.
On learning science and pseudoscience from prime-time television programming
NASA Astrophysics Data System (ADS)
Whittle, Christopher Henry
The purpose of the present dissertation is to determine whether the viewing of two particular prime-time television programs, ER and The X-Files, increases viewer knowledge of science and to identify factors that may influence learning from entertainment television programming. Viewer knowledge of scientific dialogue from two science-based prime-time television programs, ER, a serial drama in a hospital emergency room, and The X-Files, a drama about two Federal Bureau of Investigation agents who pursue alleged extraterrestrial life and paranormal activity, is studied. Level of viewing, education level, science education level, experiential factors, level of parasocial interaction, and demographic characteristics are assessed as independent variables affecting learning from entertainment television viewing. The present research involved a nine-month-long content analysis of target television program dialogue and data collection from an Internet-based survey questionnaire posted to target program-specific on-line "chat" groups. The present study demonstrated that entertainment television program viewers incidentally learn science from entertainment television program dialogue. The more they watch, the more they learn. Viewing a pseudoscientific fictional television program does not necessarily influence viewer beliefs in pseudoscience. Higher levels of formal science study are reflected in more science learning and less learning of pseudoscience from entertainment television program viewing. Pseudoscience learning from entertainment television programming is significantly related to experience with paranormal phenomena, higher levels of viewer parasocial interaction, and specifically, higher levels of cognitive parasocial interaction. In summary, the greater a viewer's understanding of science, the more they learn when they watch their favorite science-based prime-time television programs. Viewers of pseudoscience-based prime-time television programming with higher levels of paranormal experiences and parasocial interaction demonstrate cognitive interest in and learning of their favorite television program characters' ideas and beliefs. What television viewers learn from television is related to what they bring to the viewing experience. Television viewers are always learning, even when their intentions are to simply relax and watch the tube.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
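To make the direct lighting term concrete, the following sketch shows a textbook Monte Carlo estimator for the radiance reflected toward the viewer from a rectangular area light by a Lambertian surface point. Visibility is assumed unoccluded, the emitter is treated as two-sided, and the scene values are invented; this is a generic illustration, not the author's implementation.

```python
# Minimal sketch of Monte Carlo direct lighting from a rectangular area
# light onto a Lambertian surface point (visibility assumed unoccluded).
import numpy as np

def direct_lighting(p, n, albedo, light_corner, edge_u, edge_v, emitted, samples=1024):
    """Estimate radiance reflected toward the viewer at point p with normal n."""
    rng = np.random.default_rng(0)
    normal_vec = np.cross(edge_u, edge_v)
    area = np.linalg.norm(normal_vec)
    light_n = normal_vec / area               # light plane normal (two-sided emitter)
    total = 0.0
    for _ in range(samples):
        u, v = rng.random(2)                  # uniform sample on the light rectangle
        x = light_corner + u * edge_u + v * edge_v
        d = x - p
        dist2 = d @ d
        wi = d / np.sqrt(dist2)
        cos_p = max(wi @ n, 0.0)              # cosine at the receiving surface
        cos_l = abs(wi @ light_n)             # cosine at the light (two-sided)
        total += emitted * cos_p * cos_l / dist2
    return (albedo / np.pi) * area * total / samples   # area-sampling estimator

# Hypothetical scene: a 1 m^2 light 2 m above a white, upward-facing floor point.
L = direct_lighting(p=np.zeros(3), n=np.array([0.0, 0.0, 1.0]), albedo=0.8,
                    light_corner=np.array([-0.5, -0.5, 2.0]),
                    edge_u=np.array([1.0, 0.0, 0.0]),
                    edge_v=np.array([0.0, 1.0, 0.0]), emitted=10.0)
print(f"estimated outgoing radiance: {L:.4f}")
```

The estimator averages the emitted radiance weighted by the geometry term (both cosines over squared distance), then scales by the light area and the Lambertian BRDF, which is the one-bounce quantity the abstract calls the direct lighting computation.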
Efficient in-situ visualization of unsteady flows in climate simulation
NASA Astrophysics Data System (ADS)
Vetter, Michael; Olbrich, Stephan
2017-04-01
The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of the processes of data generation, visualization mapping and rendering, is distributed into two parts over the network or separated via file transfer. Within most traditional post-processing scenarios the simulation is done on a supercomputer whereas the data analysis and visualization is done on a graphics workstation. That way temporary data sets with huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least a significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on the separation of the process chain between the mapping and the rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which create a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates. To integrate in-situ processing based on our DSVR framework and methods in the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR originally was designed for rectilinear grids only. We have now implemented a new output module for ICON to take advantage of the DSVR visualization. The visualization can be configured like most output modules by using a specific namelist and is exemplarily integrated within the non-hydrostatic atmospheric model time loop. With the integration of a DSVR-based in-situ pathline extraction within ICON, a further milestone is reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation is done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n3) to O(m), where n is the grid resolution and m the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results of sample applications.
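As a schematic of the in-situ pathline idea described above (advancing particle positions once per simulation time step so that only the O(m) supporting points ever need to be stored or streamed), consider the sketch below. It uses a synthetic 2D velocity field and simple Euler steps, not ICON's grid or the DSVR library.

```python
# Minimal sketch of in-situ pathline extraction: particle positions are
# advanced once per simulation step, so only the supporting points (O(m))
# are kept rather than the full velocity field (O(n^3)).
import numpy as np

def velocity(points, t):
    """Hypothetical unsteady 2D flow (stand-in for a model's wind field)."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([-y + 0.3 * np.sin(t), x], axis=1)

def advance_pathlines(seeds, t0, dt, nsteps):
    """Euler integration executed 'in situ', one step per model time step."""
    pos = np.array(seeds, dtype=float)
    supporting_points = [pos.copy()]
    t = t0
    for _ in range(nsteps):
        pos = pos + dt * velocity(pos, t)   # one step with the currently held field
        t += dt
        supporting_points.append(pos.copy())
    return np.stack(supporting_points)      # shape: (nsteps + 1, m, 2)

paths = advance_pathlines(seeds=[[1.0, 0.0], [0.0, 1.5]], t0=0.0, dt=0.05, nsteps=400)
print(paths.shape)  # only these supporting points would be streamed to the 3D viewer
```

In a real coupling, the velocity lookup would come from the model's decomposed grid at the current time step, and only the accumulated supporting points would be sent to the streaming server for interactive playback.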
Perceiving Event Dynamics and Parsing Hollywood Films
ERIC Educational Resources Information Center
Cutting, James E.; Brunick, Kaitlin L.; Candan, Ayse
2012-01-01
We selected 24 Hollywood movies released from 1940 through 2010 to serve as a film corpus. Eight viewers, three per film, parsed them into events, which are best termed subscenes. While watching a film a second time, viewers scrolled through frames and recorded the frame number where each event began. Viewers agreed about 90% of the time. We then…
Projection-viewer for microscale aerial photography
Robert C. Aldrich; James von Mosch; Wallace Greentree
1972-01-01
A low-cost projection-viewer has been developed to enlarge portions of microscale aerial photographs. These pictures can be used for interpretation or mapping, or for comparison with existing photographs, maps, and overlays to monitor environmental changes. The projection-viewer can enlarge from 2.5 to 20 times, and can be calibrated so that maps may be drawn with a...
Tracking Online Data with YouTube's Insight Tracking Tool
ERIC Educational Resources Information Center
Kinsey, Joanne
2012-01-01
YouTube users have access to the powerful data collection tool, Insight. Insight allows YouTube content producers to collect data about the number of online views, geographic location of viewers by country, the demographics of the viewers, how a video was discovered, and the attention span of the viewer while watching the video. This article…
On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments
NASA Astrophysics Data System (ADS)
Çöltekin, A.; Lokka, I.; Zahner, M.
2016-06-01
Whether and when we should show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among the many types of 3D visualizations, a popular one that is exploited both for visual analysis and for visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.
Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-09-20
Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This was so independent of the hitter type and whether performance feedback to the participants was available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.
Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis
NASA Astrophysics Data System (ADS)
Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.
2015-08-01
The hybrid visualization and interaction tool EarthScape is presented here. The software is able to simultaneously display LiDAR point clouds, draped videos with moving footprint, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video fluxes. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.
Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema
NASA Astrophysics Data System (ADS)
Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka
2012-01-01
A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.
RUSSELL, DALE W.; RUSSELL, CRISTEL ANTONIA
2014-01-01
Objective This research investigates whether warning viewers about the presence of embedded messages in the content of a television episode affects viewers' drinking beliefs and whether audience connectedness moderates the warning's impact. Method Two hundred fifty college students participated in a laboratory experiment approximating a real-life television viewing experience. They viewed an actual television series episode containing embedded alcohol messages, and their subsequent beliefs about alcohol consequences were measured. Experimental conditions differed based on a 2 (Connectedness Level: low vs high) × 2 (Timing of the Warning: before or after the episode) × 2 (Emphasis of Warning: advertising vs health message) design. Connectedness was measured, and the timing and emphasis of the warnings were manipulated. The design also included a control condition where there was no warning. Results The findings indicate that warning viewers about embedded messages in the content of a program can yield significant differences in viewers' beliefs about alcohol. However, the warning's impact differs depending on the viewers' level of connectedness to the program. In particular, in comparison with the no-warning control condition, the advertising prewarning produced lower positive beliefs about alcohol and its consequences but only for the low-connected viewers. Highly connected viewers were not affected by a warning emphasizing advertising messages embedded in the program, but a warning emphasizing health produced significantly higher negative beliefs about drinking than in the control condition. Conclusions The presence of many positive portrayals of drinking and alcohol product placements in television series has led many to suggest ways to counter their influence. However, advocates of warnings should be conscious of their differential impact on high- and low-connected viewers. PMID:18432390
NASA Astrophysics Data System (ADS)
Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David
2013-09-01
Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.
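One way such a merger of sensory and decision levels might be written down, purely as an illustrative sketch and not the paper's actual model or notation, is a Rasch-type formulation in which the probability of a correct forced-choice response depends on the difference between a single person measure (the global visual state) and an item difficulty that shifts with the stimulus parameters:

```latex
% Illustrative sketch only; all symbols are hypothetical placeholders.
P(\text{correct} \mid \theta_j, s, c, \tau)
  = g + (1 - g)\,
    \frac{\exp\!\big[\theta_j - \beta(s, c, \tau)\big]}
         {1 + \exp\!\big[\theta_j - \beta(s, c, \tau)\big]},
\qquad
\beta(s, c, \tau) = \beta_0 + \beta_s \log s + \beta_c \log c + \beta_\tau \log \tau
```

Here \(\theta_j\) would be person j's global visual state, g the forced-choice guessing rate, and \(\beta\) an item difficulty parameterised by optotype size s, contrast c, and exposure duration \(\tau\); the point of such a form is only to show how a single latent measure could absorb the stimulus-parameter dependence the abstract describes.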
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, J. C.
2017-05-01
The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image; and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
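The adaptive histogram equalization idea mentioned above can be tried on any image with off-the-shelf tools. The sketch below uses scikit-image's contrast-limited implementation purely as a generic illustration of the technique; it is not Montage's own mViewer stretch, and the sample image is a stand-in.

```python
# Generic illustration of adaptive (contrast-limited) histogram equalization,
# the kind of stretch that preserves faint structure across a wide dynamic
# range; this uses scikit-image, not Montage's implementation.
import numpy as np
from skimage import data, exposure

img = data.moon().astype(float)
img = (img - img.min()) / (img.max() - img.min())                # rescale to [0, 1]

global_eq = exposure.equalize_hist(img)                          # global equalization
adaptive_eq = exposure.equalize_adapthist(img, clip_limit=0.03)  # CLAHE

print("global   contrast (std):", round(float(global_eq.std()), 3))
print("adaptive contrast (std):", round(float(adaptive_eq.std()), 3))
```

The adaptive variant equalizes within local tiles, which is why faint, low-contrast structure survives a stretch that would otherwise be dominated by the brightest pixels.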
NASA Astrophysics Data System (ADS)
Russell, R. M.; Johnson, R. M.; Gardiner, E. S.; Bergman, J. J.; Genyuk, J.; Henderson, S.
2004-12-01
Interactive visualizations can be powerful tools for helping students, teachers, and the general public comprehend significant features in rich datasets and complex systems. Successful use of such visualizations requires viewers to have, or to acquire, adequate expertise in use of the relevant visualization tools. In many cases, the learning curve associated with competent use of such tools is too steep for casual users, such as members of the lay public browsing science outreach web sites or K-12 students and teachers trying to integrate such tools into their learning about geosciences. "Windows to the Universe" (http://www.windows.ucar.edu) is a large (roughly 6,000 web pages), well-established (first posted online in 1995), and popular (over 5 million visitor sessions and 40 million pages viewed per year) science education web site that covers a very broad range of Earth science and space science topics. The primary audience of the site consists of K-12 students and teachers and the general public. We have developed several interactive visualizations for use on the site in conjunction with text and still image reference materials. One major emphasis in the design of these interactives has been to ensure that casual users can quickly learn how to use the interactive features without becoming frustrated and departing before they were able to appreciate the visualizations displayed. We will demonstrate several of these "user-friendly" interactive visualizations and comment on the design philosophy we have employed in developing them.
The case of the missing visual details: Occlusion and long-term visual memory.
Williams, Carrick C; Burkle, Kyle A
2017-10-01
To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing the visible details in the former and the object's overall form in the latter. On a token discrimination test, surprisingly, memory for solid or stripe occluded objects at either encoding (Experiment 1) or test (Experiment 2) was the same. In contrast, when occluded objects matched at encoding and test (Experiment 3) or when the occlusion shifted, revealing the entire object piecemeal (Experiment 4), memory was better for solid compared with stripe occluded objects, indicating that objects are represented differently in long-term visual memory. Critically, we also found that when the task emphasized remembering exactly what was shown, memory performance in the more detailed solid occlusion condition exceeded that in the stripe condition (Experiment 5). However, when the task emphasized the whole object form, memory was better in the stripe condition (Experiment 6) than in the solid condition. We argue that long-term visual memory can represent objects flexibly, and task demands can interact with visual information, allowing the viewer to cope with changing real-world visual environments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Television, computer and portable display device use by people with central vision impairment
Woods, Russell L; Satgunam, PremNandhini
2011-01-01
Purpose To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. distance viewed, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males), and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results Many LV participants reported at least “some” difficulty watching TV (71/103), reported at least “often” having difficulty with computer displays (40/76) and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer display. Younger LV participants also used a larger display when compared to older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than that reported by NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion In this study, both NS and LV participants had comparable video viewing habits. Most LV participants in our sample reported difficulty watching TV, and indicated an interest in assistive technology, such as image enhancement. As our participants reported that at least half their video viewing hours are spent alone and that there is usually more than one TV per household, this suggests that there are opportunities to use image enhancement on the TVs of LV viewers without interfering with the viewing experience of NS viewers. PMID:21410501
Lesbian (in)visibility in Italian Renaissance culture: Diana and other cases of donna con donna.
Simons, P
1994-01-01
Current conceptualizations of sexual identity in the West are not necessarily useful to an historian investigating "lesbianism" in the social history and visual representations of different periods. After an overview of Renaissance documents treating donna con donna relations which examines the potentially positive effects of condemnation and silence, the paper focuses on Diana, the goddess of chastity, who bathed with her nymphs as an exemplar of female bodies preserved for heterosexual, reproductive pleasures. Yet the self-sufficiency and bodily contact sometimes represented in images of this secluded all-female gathering might suggest "deviant" responses from their viewers.
Emotions Bias Perceptions of Realism in Audiovisual Media: Why We May Take Fiction for Real
ERIC Educational Resources Information Center
Konijn, Elly A.; Walma van der Molen, Juliette H.; van Nes, Sander
2009-01-01
This study investigated whether emotions induced in TV-viewers (either as an emotional state or co-occurring with emotional involvement) would increase viewers' perception of realism in a fake documentary and affect the information value that viewers would attribute to its content. To that end, two experiments were conducted that manipulated (a)…
ERIC Educational Resources Information Center
Leckenby, John D.; Surlin, Stuart H.
The nature of incidental social learning in television viewers of "All in the Family" and "Sanford and Son" was the focus of this investigation. Seven hundred and eighty-one racially and economically mixed respondents from Chicago and Atlanta provided the data source. Telephone interviews attempted to assess viewer opinions of…
Molmil: a molecular viewer for the PDB and beyond.
Bekker, Gert-Jan; Nakamura, Haruki; Kinjo, Akira R
2016-01-01
We have developed a new platform-independent web-based molecular viewer using JavaScript and WebGL. The molecular viewer, Molmil, has been integrated into several services offered by Protein Data Bank Japan and can be easily extended with new functionality by third party developers. Furthermore, the viewer can be used to load files in various formats from the user's local hard drive without uploading the data to a server. Molmil is available for all platforms supporting WebGL (e.g. Windows, Linux, iOS, Android) from http://gjbekker.github.io/molmil/. The source code is available at http://github.com/gjbekker/molmil under the LGPLv3 licence.
PACS viewer interoperability for teleconsultation based on DICOM
NASA Astrophysics Data System (ADS)
Salant, Eliot; Shani, Uri
2000-05-01
Real-time teleconsultation in radiology enables physicians to perform same-time consultation between remote peers, based on medical images. Since digital medical images are commonly viewed on PACS workstations, it is possible to use one of several methods for remote sharing of the computer screen. For instance, software products such as Microsoft NetMeeting or IBM SameTime can be used. However, the amount of image data transmitted can be very high, since even minute changes to an image window/level require re-transmitting the entire image again and again, which is too inefficient. When the problem is restricted to identical hardware and software from a single vendor, it is easier to develop a solution that employs a proprietary, specialized protocol to coordinate the visualization process. We developed such a solution, which demonstrated an excellent performance advantage by transmitting only the graphical events between the machines rather than the image pixels. However, our solution did not inter-operate with other viewers: it worked only on X11/Motif systems, and only between compatible versions of the same viewer application. Our purpose in this paper is to enable inter-operability between viewers on different platforms and from different vendors. We distinguish three parts: session control, audiovisual (multimedia) data exchange, and medical image sharing. We intend to deal only with the third component, assuming the use of existing standards for the first two parts. After a session between two or more parties is established, and optional audiovisual data channels are set up, the medical consultation is considered as the coordinated exchange of medical image contents. The contents exchange protocol has two main requirements. In the first stage, the parties negotiate the actual set of capabilities to be used during the consultation, using a formal description of these capabilities; the capabilities that one station lacks relative to the other (such as specific image processing algorithms) can be 'borrowed.' In the second stage, when interaction starts, the protocol should assume that the graphical user interfaces of the stations may differ, as may working procedures. During the consultation, data is exchanged based on DICOM for the data model of medical image folders and the data format of image objects.
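To illustrate the two-stage exchange sketched above (capability negotiation followed by coordinated sharing of viewing state rather than pixels), here is a deliberately simplified, hypothetical message format in JSON. It is not DICOM and not any vendor's protocol; the field names and values are invented for illustration only.

```python
# Hypothetical capability-negotiation and event messages for a viewer
# teleconsultation session; the schema is invented for illustration and
# is not part of DICOM or any existing product.
import json

def negotiate(local_caps, remote_caps):
    """Agree on the intersection of capabilities; missing ones may be 'borrowed'."""
    shared = sorted(set(local_caps) & set(remote_caps))
    borrowed = sorted(set(remote_caps) - set(local_caps))
    return {"type": "capability-ack", "shared": shared, "borrow": borrowed}

local_caps = ["window-level", "zoom", "annotate"]
remote_caps = ["window-level", "zoom", "edge-enhance"]
print(json.dumps(negotiate(local_caps, remote_caps), indent=2))

# During the session, only compact state-change events are exchanged,
# never re-rendered image pixels.
event = {"type": "window-level", "study": "example-study-id", "window": 350, "level": 40}
print(json.dumps(event))
```

The point of the sketch is the bandwidth argument made in the abstract: a window/level change is a few dozen bytes as an event, versus re-sending an entire image for every adjustment.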
Development of a Unique Web2.0 Interface for Global Collaboration in Land Cover Change Research
NASA Astrophysics Data System (ADS)
Dunham, M.; Boriah, S.; Mithal, V.; Garg, A.; Steinbach, M.; Kumar, V.; Potter, C. S.; Klooster, S.; Castilla-Rubio, J.
2010-12-01
The ability to detect changes in forest cover is of critical importance for both economic and scientific reasons, e.g. using forests for economic carbon sink management and studying natural and anthropogenic impacts on ecosystems. The contribution of greenhouse gases from deforestation is one of the most uncertain elements of the global carbon cycle. In fact, changes in forests account for as much as 20% of the greenhouse gas emissions in the atmosphere, an amount second only to fossil fuel emissions. Thus, a key ingredient for effective forest management, whether for carbon trading or other purposes, is quantifiable knowledge about changes in forest cover. Rich amounts of data from remotely-sensed images are now becoming available for detecting changes in forests or more generally, land cover. However, in spite of the importance of this problem and the considerable advances made over the last few years in high-resolution satellite data acquisition, data mining, and online mapping tools and services, end users still lack practical tools to help them manage and transform this data into actionable knowledge of changes in forest ecosystems that can be used for decision making and policy planning purposes. We have developed innovations in a number of technical areas with the goal of providing actionable knowledge to end users: (i) identification of changes in global forest cover, (ii) characterization of those changes, (iii) discovery of relationships between the number, magnitude, and type of these changes with natural and anthropogenic variables, and (iv) a web-based platform that supports interactive visualization of disturbances and relationships. The focus of this abstract is on the interactive web-based platform. This key component of the project is a graphical user interface built on the Flash framework. The viewer is a groundbreaking, multi-purpose application used for everything from algorithm refinement and data analysis for the team to a demonstration platform for research partners. The team continues to develop the utility to allow for worldwide researcher and community contributions with the hopes of enhancing global understanding of environmental change.
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
Utilizing Light-field Imaging Technology in Neurosurgery.
Chen, Brian R; Buchanan, Ian A; Kellis, Spencer; Kramer, Daniel; Ohiorhenuan, Ifije; Blumenfeld, Zack; Grisafe Ii, Dominic J; Barbaro, Michael F; Gogia, Angad S; Lu, James Y; Chen, Beverly B; Lee, Brian
2018-04-10
Traditional still cameras can only focus on a single plane for each image while rendering everything outside of that plane out of focus. However, new light-field imaging technology makes it possible to adjust the focus plane after an image has already been captured. This technology allows the viewer to interactively explore an image with objects and anatomy at varying depths and clearly focus on any feature of interest by selecting that location during post-capture viewing. These images with adjustable focus can serve as valuable educational tools for neurosurgical residents. We explore the utility of light-field cameras and review their strengths and limitations compared to other conventional types of imaging. The strength of light-field images is the adjustable focus, as opposed to the fixed-focus of traditional photography and video. A light-field image also is interactive by nature, as it requires the viewer to select the plane of focus and helps with visualizing the three-dimensional anatomy of an image. Limitations include the relatively low resolution of light-field images compared to traditional photography and video. Although light-field imaging is still in its infancy, there are several potential uses for the technology to complement traditional still photography and videography in neurosurgical education.
GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data
Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie
2008-01-01
The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org. PMID:17932055
Colour in flux: describing and printing colour in art
NASA Astrophysics Data System (ADS)
Parraman, Carinna
2008-01-01
This presentation will describe artists, practitioners and scientists who were interested in developing a deeper psychological, emotional and practical understanding of the human visual system and who worked with wavelength, paint and other materials. Drawing on a selection of prints at The Prints and Drawings Department at Tate London, the presentation will refer to artists who were motivated by issues relating to how colour pigment was mixed and printed, in order to interrogate and explain colour perception and colour science, and, in art, how artists have used colour to challenge the viewer and how a viewer might describe their experience of colour. The title Colour in Flux refers not only to the perceptual effect of the juxtaposition of one colour pigment with another, but also to the changes and challenges for the print industry. In the light of screenprinted examples from the 60s and 70s, the presentation will discuss 21st-century ideas on colour and how these notions have informed the Centre for Fine Print Research's (CFPR) practical research in colour printing. The latter part of this presentation will discuss the implications of the need for new ink-mixing methods that move away from existing colour spaces and non-intuitive colour mixing towards bespoke ink sets and colour-mixing approaches and methods that are not reliant on RGB or CMYK.
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and that can be installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.
An optimized web-based approach for collaborative stereoscopic medical visualization
Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C
2013-01-01
Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization. PMID:23048008
Tips for Better Visual Elements in Posters and Podium Presentations
Zerwic, JJ; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M
2010-01-01
Context The ability to effectively communicate through posters and podium presentations using appropriate visual content and style is essential for health care educators. Objectives To offer suggestions for more effective visual elements of posters and podium presentations. Methods We present the experiences of our multidisciplinary publishing group, whose combined experiences and collaboration have provided us with an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation. Findings/Suggestions Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting light background with dark text (and vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience more easily read text. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience’s view of either the presentation or presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Conclusions Health care educators often share information through posters and podium presentations. These tips should help the visual elements of presentations be more effective. PMID:20853236
Psychological and neural responses to art embody viewer and artwork histories.
Vartanian, Oshin; Kaufman, James C
2013-04-01
The research programs of empirical aesthetics and neuroaesthetics have reflected deep concerns about viewers' sensitivities to artworks' historical contexts by investigating the impact of two factors on art perception: viewers' developmental (and educational) histories and the contextual histories of artworks. These considerations are consistent with data demonstrating that art perception is underwritten by dynamically reconfigured and evolutionarily adapted neural and psychological mechanisms.
Learning a Continuous-Time Streaming Video QoE Model.
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C
2018-05-01
Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
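As a rough illustration of what a continuous-time QoE signal looks like, the toy sketch below (not the authors' time-varying QoE Indexer; the weights and time constants are invented) exponentially smooths an instantaneous quality estimate to mimic memory, applies an immediate penalty when a stall begins, and lets that penalty decay slowly afterwards to mimic recency:

# Toy continuous-time QoE sketch (NOT the paper's time-varying QoE Indexer).
# quality[t] is an instantaneous perceptual-quality estimate in [0, 100] and
# stalled[t] flags rebuffering; time constants and weights are illustrative only.
def continuous_qoe(quality, stalled, dt=1.0,
                   tau_memory=8.0, stall_penalty=35.0, tau_recovery=15.0):
    qoe = []
    smoothed = quality[0]
    hangover = 0.0  # lingering dissatisfaction after a stall (recency effect)
    for q, s in zip(quality, stalled):
        # memory: viewers integrate quality over the recent past
        smoothed += (q - smoothed) * (dt / tau_memory)
        if s:
            hangover = stall_penalty               # a stall hits QoE immediately
        else:
            hangover *= (1.0 - dt / tau_recovery)  # and decays slowly afterwards
        qoe.append(max(0.0, smoothed - hangover))
    return qoe


if __name__ == "__main__":
    q = [80] * 10 + [40] * 5 + [80] * 15          # a bitrate drop...
    s = [False] * 12 + [True] * 3 + [False] * 15  # ...and a 3-second stall
    print([round(v, 1) for v in continuous_qoe(q, s)])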
Seaside, Oregon, Tsunami Pilot Study-Modernization of FEMA Flood Hazard Maps: GIS Data
Wong, Florence L.; Venturato, Angie J.; Geist, Eric L.
2006-01-01
Introduction: The Federal Emergency Management Agency (FEMA) Flood Insurance Rate Map (FIRM) guidelines do not currently exist for conducting and incorporating tsunami hazard assessments that reflect the substantial advances in tsunami research achieved in the last two decades; this conclusion is the result of two FEMA-sponsored workshops and the associated Tsunami Focused Study (Chowdhury and others, 2005). Therefore, as part of FEMA's Map Modernization Program, a Tsunami Pilot Study was carried out in the Seaside/Gearhart, Oregon, area to develop an improved Probabilistic Tsunami Hazard Analysis (PTHA) methodology and to provide recommendations for improved tsunami hazard assessment guidelines (Tsunami Pilot Study Working Group, 2006). The Seaside area was chosen because it is typical of many coastal communities in the section of the Pacific Coast from Cape Mendocino to the Strait of Juan de Fuca, and because State agencies and local stakeholders expressed considerable interest in mapping the tsunami threat to this area. The study was an interagency effort by FEMA, U.S. Geological Survey, and the National Oceanic and Atmospheric Administration (NOAA), in collaboration with the University of Southern California, Middle East Technical University, Portland State University, Horning Geoscience, Northwest Hydraulics Consultants, and the Oregon Department of Geology and Mineral Industries. We present the spatial (geographic information system, GIS) data from the pilot study in standard GIS formats and provide files for visualization in Google Earth, a global map viewer.
NASA Astrophysics Data System (ADS)
Pedersen, T. F.; Zwiers, F. W.; Breen, C.; Murdock, T. Q.
2014-12-01
The Pacific Institute for Climate Solutions (PICS) has now made available online three free, peer-reviewed, unique animated short courses in a series entitled "Climate Insights 101" that respectively address basic climate science, carbon-emissions mitigation approaches and opportunities, and adaptation. The courses are suitable for students of all ages, and use professionally narrated animations designed to hold a viewer's attention. Multiple issues are covered, including complex concerns like the construction of general circulation models, carbon pricing schemes in various countries, and adaptation approaches in the face of extreme weather events. Clips will be shown in the presentation. The first course (Climate Science Basics) has now been seen by over two hundred thousand individuals in over 80 countries, despite being offered in English only. Each course takes about two hours to work through, and, recognizing that this duration might pose an attention barrier to some students, PICS selected a number of short clips from the climate-science course and posted them as independent snippets on YouTube. A companion series of YouTube videos entitled "Clear The Air" was created to confront the major global-warming denier myths. But a major challenge remains: despite numerous efforts to promote the availability of the free courses and the shorter YouTube pieces, they have yet to become widely known. Strategies to overcome that constraint will be discussed.
Roth, Christopher J; Lannum, Louis M; Dennison, Donald K; Towbin, Alexander J
2016-10-01
Clinical specialties have widely varied needs for diagnostic image interpretation and for the consumption of clinical images and video. Enterprise viewers are being deployed as part of electronic health record implementations to present the broad spectrum of clinical imaging and multimedia content created in routine medical practice today. This white paper will describe the enterprise viewer use cases, drivers of recent growth, technical considerations, functionality differences between enterprise and specialty viewers, and likely future states. It is aimed at CMIOs and CIOs interested in optimizing the image-enablement of their electronic health record, or those who may be struggling with the many clinical image viewers their enterprises may employ today.
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Integrating text and pictorial information: eye movements when looking at print advertisements.
Rayner, K; Rotello, C M; Stewart, A J; Keir, J; Duffy, S A
2001-09-01
Viewers looked at print advertisements as their eye movements were recorded. Half of them were told to pay special attention to car ads, and the other half were told to pay special attention to skin-care ads. Viewers tended to spend more time looking at the text than the picture part of the ad, though they did spend more time looking at the type of ad they were instructed to pay attention to. Fixation durations and saccade lengths were both longer on the picture part of the ad than the text, but more fixations were made on the text regions. Viewers did not alternate fixations between the text and picture part of the ad, but they tended to read the large print, then the smaller print, and then they looked at the picture (although some viewers did an initial cursory scan of the picture). Implications for (a) how viewers integrate pictorial and textual information and (b) applied research and advertisement development are discussed.
The notion of the motion: the neurocognition of motion lines in visual narratives.
Cohn, Neil; Maher, Stephen
2015-03-19
Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs directly tie to the "streaks" appearing in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that either had no lines or anomalous, reversed lines. In Experiment 1, viewing times were shorter for images with normal lines than for those with no lines, which in turn were shorter than for those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggested that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the "vocabulary" of the visual language of comics. Copyright © 2015 Elsevier B.V. All rights reserved.
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
Designing for Broad Understanding of Science.
Hristov, Nickolay; Strohecker, Carol; Allen, Louise; Merson, Martha
2018-06-04
With the acceleration and increasing complexity of macro-scale problems such as climate change, the need for scientists to ensure that their work is understood has become urgent. As citizens and recipients of public funds for research, scientists have an obligation to communicate their findings in ways many people can understand. However, developing translations that are broadly accessible without being "dumbed down" can be challenging. Fortunately, tenets of visual literacy, combined with narrative methods, can help to convey scientific knowledge with fidelity, while sustaining viewers' interest. Here we outline strategies for such translating, with an emphasis on visual approaches. Among the examples is an innovative, NSF-funded professional development initiative in which National Park rangers use scientists' imagery to create compelling explanations for the visiting public. Thoughtful visualizations based on interpretive images, motion pictures, 3D animations and augmented, immersive experiences complement the impact of the natural resource and enhance the role of the park ranger. The visualizations become scaffolds for participatory exchanges in which the ranger transcends the traditional roles of information-holder and presenter, to facilitate provocative conversations that provide members of the public with enjoyable experiences and well-founded bases for reflection and ultimately understanding. The process of generating the supporting visualizations benefits from partnerships with design professionals, who develop opportunities for engaging the public by translating important scientific findings and messages in compelling and memorable ways.
Great bowerbirds create theaters with forced perspective when seen by their audience.
Endler, John A; Endler, Lorna C; Doerr, Natalie R
2010-09-28
Birds in the infraorder Corvida [1] (ravens, jays, bowerbirds) are renowned for their cognitive abilities [2-4], which include advanced problem solving with spatial inference [4-8], tool use and complex constructions [7-10], and bowerbird cognitive ability is associated with mating success [11]. Great bowerbird males construct bowers with a long avenue from within which females view the male displaying over his bower court [10]. This predictable audience viewpoint is a prerequisite for forced (altered) visual perspective [12-14]. Males make courts with gray and white objects that increase in size with distance from the avenue entrance. This gradient creates forced visual perspective for the audience; court object visual angles subtended on the female viewer's eye are more uniform than if the objects were placed at random. Forced perspective can yield false perception of size and distance [12, 15]. After experimental reversal of their size-distance gradient, males recovered their gradients within 3 days, and there was little difference from the original after 2 wks. Variation among males in their forced-perspective quality as seen by their female audience indicates that visual perspective is available for use in mate choice, perhaps as an indicator of cognitive ability. Regardless of function, the creation and maintenance of forced visual perspective is clearly important to great bowerbirds and suggests the possibility of a previously unknown dimension of bird cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.
Efficiently, Effectively Detecting Mobile App Bugs with AppDoctor
2014-04-01
[Extraction fragment of Table 1 (per-app bug counts): apps listed include ACV Comic Viewer, OpenSudoku, OI Notepad, and OI Safe; reported ACV Comic Viewer bugs include a crash caused by an incorrect assumption that Google Services were present, and an unchecked failure (remainder of table truncated).]
Interactions among Collective Spectators Facilitate Eyeblink Synchronization
Nomura, Ryota; Liang, Yingzong; Okada, Takeshi
2015-01-01
Whereas the entrainment of movements and respiration among audience members has been known as a basis of collective excitement in the theater, the role of the entrainment of cognitive processes among audience members is still unclear. In the current study, temporal patterns of the audience's attention were observed using eyeblink responses. To determine the effect of interactions among audience members on cognitive entrainment, as well as its direction (attractive or repulsive), the eyeblink synchronization of the following two groups was compared: (1) the experimental condition, where the audience members (seven frequent viewers and seven first-time viewers) viewed live performances in situ, and (2) the control condition, where the audience members (15 frequent viewers and 15 first-time viewers) viewed videotaped performances in individual experimental settings (results reported in a previous study). The results of this study demonstrated that the mean values of a measure of asynchrony (i.e., D interval) were much lower for the experimental condition than for the control condition. Frequent viewers had a moderate attractive effect that increased as the story progressed, while a strong attractive effect was observed throughout the story for first-time viewers. The attractive effect of interactions among a group of spectators was discussed from the viewpoint of cognitive and somatic entrainment in live performances. PMID:26479405
Viewers can keep up with fast subtitles: Evidence from eye movements.
Szarkowska, Agnieszka; Gerber-Morón, Olivia
2018-01-01
People watch subtitled audiovisual materials more than ever before. With the proliferation of subtitled content, we are also witnessing an increase in subtitle speeds. However, there is an ongoing controversy about what optimum subtitle speeds should be. This study looks into whether viewers can keep up with increasingly fast subtitles and whether the way people cope with subtitled content depends on their familiarity with subtitling and on their knowledge of the language of the film soundtrack. We tested 74 English, Polish and Spanish viewers watching films subtitled at different speeds (12, 16 and 20 characters per second). The films were either in Hungarian, a language unknown to the participants (Experiment 1), or in English (Experiment 2). We measured viewers' comprehension, self-reported cognitive load, scene and subtitle recognition, preferences and enjoyment. By analyzing people's eye gaze, we were able to discover that most viewers could read the subtitles as well as follow the images, coping well even with fast subtitle speeds. Slow subtitles triggered more re-reading, particularly in English clips, causing more frustration and less enjoyment. Faster subtitles with unreduced text were preferred in the case of English videos, and slower subtitles with text edited down in Hungarian videos. The results provide empirical grounds for revisiting current subtitling practices to enable more efficient processing of subtitled videos for viewers.
Ostroff, Joshua; Jernigan, David H.
2016-01-01
Underage alcohol use is a global public health problem and alcohol advertising has been associated with underage drinking. The alcohol industry regulates itself and is the primary control on alcohol advertising in many countries around the world, advising trade association members to advertise only in adult-oriented media. Despite high levels of compliance with these self-regulatory guidelines, in several countries youth exposure to alcohol advertising on television has grown faster than adult exposure. In the United States, we found that exposure for underage viewers ages 18–20 grew from 2005 through 2011 faster than any adult age group. Applying a method adopted from a court in the US to identify underage targeting of advertising, we found evidence of targeting of alcohol advertising to underage viewers ages 18–20. The court's rule appeared in Lockyer v. Reynolds (The People ex rel. Bill Lockyer v. R.J. Reynolds Tobacco Company, GIC764118, 2002). We demonstrated that alcohol companies were able to modify their advertising practices to maintain current levels of adult advertising exposure while reducing youth exposure. PMID:24424494
Visual perception of fatigued lifting actions.
Fischer, Steven L; Albert, Wayne J; McGarry, Tim
2012-12-01
Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. In a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what was the key visual information required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions containing a fresh and a fatigued lift from the same individual presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing condition, the latter result indicating kinematic dynamics as providing sufficient information for successful fatigue discrimination. Moreover, results from single viewer investigation reported fatigue detection (75%) from point-light information describing only the kinematics of the box lifted. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved upon through future research. Copyright © 2012 Elsevier B.V. All rights reserved.
Rocinante, a virtual collaborative visualizer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, M.J.; Ice, L.G.
1996-12-31
With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.
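The networked mode described above amounts to a listener that keeps a local scene model consistent with state broadcast by remote processes. A generic sketch of that idea follows (the packet layout is invented for illustration and is not Rocinante's wire protocol):

# Generic sketch of pose synchronization over UDP; the packet format
# (object id + x, y, z + roll, pitch, yaw as little-endian doubles) is
# hypothetical and NOT Rocinante's actual protocol.
import socket
import struct

POSE_FMT = "<I6d"                      # uint32 id, six float64 pose values
POSE_SIZE = struct.calcsize(POSE_FMT)  # 52 bytes

def listen_for_poses(scene, host="0.0.0.0", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        packet, _addr = sock.recvfrom(1024)
        if len(packet) < POSE_SIZE:
            continue                   # ignore malformed packets
        obj_id, x, y, z, roll, pitch, yaw = struct.unpack(
            POSE_FMT, packet[:POSE_SIZE])
        # update the local geometric model; a renderer would pick this up
        scene[obj_id] = {"position": (x, y, z),
                         "orientation": (roll, pitch, yaw)}

# Usage (sketch): scene = {}; listen_for_poses(scene)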
A GeoWall with Physics and Astronomy Applications
NASA Astrophysics Data System (ADS)
Dukes, Phillip; Bruton, Dan
2008-03-01
A GeoWall is a passive stereoscopic projection system that can be used by students, teachers, and researchers for visualization of the structure and dynamics of three-dimensional systems and data. The type of system described here adequately provides 3-D visualization in natural color for large or small groups of viewers. The name "GeoWall" derives from its initial development to visualize data in the geosciences.1 An early GeoWall system was developed by Paul Morin at the electronic visualization laboratory at the University of Minnesota and was applied in an introductory geology course in spring of 2001. Since that time, several stereoscopic media, which are applicable to introductory-level physics and astronomy classes, have been developed and released into the public domain. In addition to the GeoWall's application in the classroom, there is considerable value in its use as part of a general science outreach program. In this paper we briefly describe the theory of operation of stereoscopic projection and the basic necessary components of a GeoWall system. Then we briefly describe how we are using a GeoWall as an instructional tool for the classroom and informal astronomy education and in research. Finally, we list sources for several of the free software media in physics and astronomy available for use with a GeoWall system.
Invariant visual object recognition and shape processing in rats
Zoccolan, Davide
2015-01-01
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421
Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis
2009-12-01
We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.
Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan
2012-01-01
Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted with different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground-truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D rendering was compared to angiography, pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computational time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
Roldan, Stephanie M.
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538
Visualization and dissemination of global crustal models on virtual globes
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Pan, Xin; Sun, Jian-zhong
2016-05-01
Global crustal models, such as CRUST 5.1 and its descendants, are very useful in a broad range of geoscience applications. The current method for representing the existing global crustal models relies heavily on dedicated computer programs to read and work with those models. It is therefore not well suited to visualizing and disseminating global crustal information to non-geological users. This shortcoming is becoming obvious as more and more people from both academic and non-academic institutions are interested in understanding the structure and composition of the crust. There is a pressing need to provide a modern, universal and user-friendly method to represent and visualize the existing global crustal models. In this paper, we present a systematic framework to easily visualize and disseminate the global crustal structure on virtual globes. Based on crustal information exported from the existing global crustal models, we first create a variety of KML-formatted crustal models with different levels of detail (LODs). These KML-formatted models can then be loaded into a virtual globe for 3D visualization and model dissemination. A Keyhole Markup Language (KML) generator (Crust2KML) is developed to automatically convert crustal information obtained from the CRUST 1.0 model into KML-formatted global crustal models, and a web application (VisualCrust) is designed to disseminate and visualize those models over the Internet. The presented framework and associated implementations can be conveniently exported to other applications to support visualizing and analyzing the Earth's internal structure on both regional and global scales in a 3D virtual-globe environment.
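The core of a converter such as Crust2KML is mechanical: each grid cell of the crustal model becomes a KML feature that a virtual globe can display. A hedged sketch of that step (not the authors' code; the cell centering, field names, and output path are illustrative assumptions):

# Minimal sketch of turning gridded crustal values into KML placemarks
# (not the authors' Crust2KML tool). Each cell is assumed to be centered
# on (lat, lon) with a crustal-thickness value in km.
import xml.etree.ElementTree as ET

def grid_to_kml(cells, out_path="crust.kml"):
    """cells: iterable of (lat, lon, thickness_km) tuples."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for lat, lon, thickness in cells:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = f"{lat:.1f}, {lon:.1f}"
        ET.SubElement(pm, "description").text = (
            f"Crustal thickness: {thickness:.1f} km")
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    ET.ElementTree(kml).write(out_path, xml_declaration=True,
                              encoding="UTF-8")

# Usage (sketch): grid_to_kml([(45.5, -120.5, 38.2), (45.5, -119.5, 37.9)])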
Generation of High-Resolution Geo-referenced Photo-Mosaics From Navigation Data
NASA Astrophysics Data System (ADS)
Delaunoy, O.; Elibol, A.; Garcia, R.; Escartin, J.; Fornari, D.; Humphris, S.
2006-12-01
Optical images of the ocean floor are a rich source of data for understanding biological and geological processes. However, due to the attenuation of light in sea water, the area covered by optical systems is very limited, and a large number of images are therefore needed to cover an area of interest, as individually they do not provide a global view of the surveyed area. Generating a composite view (or photo-mosaic) from multiple overlapping images is usually the most practical and flexible solution to visually cover a wide area, allowing the analysis of the site in one single representation of the ocean floor. In most camera surveys carried out nowadays, some sort of positioning information is available (e.g., USBL, DVL, INS, gyros, etc). For a towed camera, an estimate of the tether length combined with the mother ship's GPS reading can also serve as navigation data. In any case, a photo-mosaic can be built just by taking into account the position and orientation of the camera. On the other hand, most of the regions of interest to the scientific community are quite large (>1 km2), and since better resolution is always required, the final photo-mosaic can be very large (>1,000,000 × 1,000,000 pixels) and cannot be handled by commonly available software. For this reason, we have developed a software package able to load a navigation file and the sequence of acquired images to automatically build a geo-referenced mosaic. This navigated mosaic provides a global view of the site of interest at the maximum available resolution. The developed package includes a viewer, allowing the user to load, view and annotate these geo-referenced photo-mosaics on a personal computer. A software library has been developed to allow the viewer to manage such very big images; the size of the resulting mosaic is now limited only by the size of the hard drive. Work is being carried out to apply image processing techniques to the navigated mosaic, with the intention of locally improving image alignment. Tests have been conducted using the data acquired during the cruise LUSTRE'96 (LUcky STRike Exploration, 37°17'N 32°17'W) by WHOI. During this cruise, the ARGO-II tethered vehicle acquired ~21,000 images in a ~1 km2 area of the seafloor to map the geology of this hydrothermal field at high resolution. The obtained geo-referenced photo-mosaic has a resolution of 1.5 cm per pixel, with a coverage of ~25% of the Lucky Strike area. Data and software will be made publicly available.
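A navigation-only mosaic of the kind described above can be sketched as follows (this is not the authors' package; it assumes a flat seafloor, a nadir-pointing camera, and a known ground resolution in metres per pixel, and simply pastes each image at its navigated position after rotating it by the vehicle heading):

# Simplified sketch of navigation-only mosaicking (not the authors' software):
# paste each nadir image onto a large canvas at the position given by the
# navigation file, after rotating it to north-up using the vehicle heading.
from PIL import Image  # Pillow, assumed available

def build_mosaic(frames, metres_per_pixel=0.015, canvas_px=(8000, 8000)):
    """frames: iterable of (image_path, easting_m, northing_m, heading_deg)."""
    mosaic = Image.new("RGB", canvas_px)
    cx, cy = canvas_px[0] // 2, canvas_px[1] // 2
    for path, east, north, heading in frames:
        img = Image.open(path).convert("RGB")
        img = img.rotate(-heading, expand=True)   # align image to north-up
        px = cx + int(east / metres_per_pixel) - img.width // 2
        py = cy - int(north / metres_per_pixel) - img.height // 2
        mosaic.paste(img, (px, py))               # later images overwrite earlier ones
        img.close()
    return mosaic

# Usage (sketch):
# build_mosaic([("img_0001.jpg", 12.4, -3.1, 87.0)]).save("mosaic.jpg")

Refining such a first-pass mosaic with image-to-image registration is exactly the "locally improving image alignment" step the abstract mentions as ongoing work.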
Impact of A Neonatal-Bereavement-Support DVD on Parental Grief: A Randomized Controlled Trial
Rosenbaum, Joan L.; Smith, Joan R.; Yan, Yan; Abram, Nancy; Jeffe, Donna B.
2014-01-01
This study tested the effect of a neonatal-bereavement-support DVD on parental grief after their baby’s death in our Neonatal Intensive Care Unit compared with standard bereavement care (controls). Following a neonatal death, we measured grief change from 3- to 12-month follow-up using a mixed-effects model. Intent-to-treat analysis was not significant, but only 18 parents selectively watched the DVD. Thus, we subsequently compared DVD-viewers with DVD-non-viewers and controls. DVD-viewers reported higher grief at 3-month interviews compared with DVD-non-viewers and controls. Higher grief at 3 months was negatively correlated with social support and spiritual/religious beliefs. These findings have implications for neonatal-bereavement care. PMID:25530502
2017-12-08
This image depicts a vast canyon of dust and gas in the Orion Nebula from a 3-D computer model based on observations by NASA's Hubble Space Telescope and created by science visualization specialists at the Space Telescope Science Institute (STScI) in Baltimore, Md. A 3-D visualization of this model takes viewers on an amazing four-minute voyage through the 15-light-year-wide canyon. Credit: NASA, G. Bacon, L. Frattare, Z. Levay, and F. Summers (STScI/AURA) Go here to learn more about Hubble 3D: www.nasa.gov/topics/universe/features/hubble_imax_premier... or www.imax.com/hubble/ Take an exhilarating ride through the Orion Nebula, a vast star-making factory 1,500 light-years away. Swoop through Orion's giant canyon of gas and dust. Fly past behemoth stars whose brilliant light illuminates and energizes the entire cloudy region. Zoom by dusty tadpole-shaped objects that are fledgling solar systems. This virtual space journey isn't the latest video game but one of several groundbreaking astronomy visualizations created by specialists at the Space Telescope Science Institute (STScI) in Baltimore, the science operations center for NASA's Hubble Space Telescope. The cinematic space odysseys are part of the new Imax film "Hubble 3D," which opens today at select Imax theaters worldwide. The 43-minute movie chronicles the 20-year life of Hubble and includes highlights from the May 2009 servicing mission to the Earth-orbiting observatory, with footage taken by the astronauts. The giant-screen film showcases some of Hubble's breathtaking iconic pictures, such as the Eagle Nebula's "Pillars of Creation," as well as stunning views taken by the newly installed Wide Field Camera 3. While Hubble pictures of celestial objects are awe-inspiring, they are flat 2-D photographs. For this film, those 2-D images have been converted into 3-D environments, giving the audience the impression they are space travelers taking a tour of Hubble's most popular targets. "A large-format movie is a truly immersive experience," says Frank Summers, an STScI astronomer and science visualization specialist who led the team that developed the movie visualizations. The team labored for nine months, working on four visualization sequences that comprise about 12 minutes of the movie. "Seeing these Hubble images in 3-D, you feel like you are flying through space and not just looking at picture postcards," Summers continued. "The spacescapes are all based on Hubble images and data, though some artistic license is necessary to produce the full depth of field needed for 3-D." The most ambitious sequence is a four-minute voyage through the Orion Nebula's gas-and-dust canyon, about 15 light-years across. During the ride, viewers will see bright and dark, gaseous clouds; thousands of stars, including a grouping of bright, hefty stars called the Trapezium; and embryonic planetary systems. The tour ends with a detailed look at a young circumstellar disk, which is much like the structure from which our solar system formed 4.5 billion years ago. Based on a Hubble image of Orion released in 2006, the visualization was a collaborative effort between science visualization specialists at STScI, including Greg Bacon, who sculpted the Orion Nebula digital model, with input from STScI astronomer Massimo Roberto; the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign; and the Spitzer Science Center at the California Institute of Technology in Pasadena. 
For some of the sequences, STScI imaging specialists developed new techniques for transforming the 2-D Hubble images into 3-D. STScI image processing specialists Lisa Frattare and Zolt Levay, for example, created methods of splitting a giant gaseous pillar in the Carina Nebula into multiple layers to produce a 3-D effect, giving the structure depth. The Carina Nebula is a nursery for baby stars. Frattare painstakingly removed the thousands of stars in the image so that Levay could separate the gaseous layers on the isolated Carina pillar. Frattare then replaced the stars into both foreground and background layers to complete the 3-D model. For added effect, the same separation was done for both visible and infrared Hubble images, allowing the film to cross-fade between wavelength views in 3-D. In another sequence viewers fly into a field of 170,000 stars in the giant star cluster Omega Centauri. STScI astronomer Jay Anderson used his stellar database to create a synthetic star field in 3-D that matches recent razor-sharp Hubble photos. The film's final four-minute sequence takes viewers on a voyage from our Milky Way Galaxy past many of Hubble's best galaxy shots and deep into space. Some 15,000 galaxies from Hubble's deepest surveys stretch billions of light-years across the universe in a 3-D sequence created by STScI astronomers and visualizers. The view dissolves into a cobweb that traces the universe's large-scale structure, the backbone from which galaxies were born. In addition to creating visualizations, STScI's education group also provided guidance on the "Hubble 3D" Educator Guide, which includes standards-based lesson plans and activities about Hubble and its mission. Students will use the guide before or after seeing the movie. "The guide will enhance the movie experience for students and extend the movie into classrooms," says Bonnie Eisenhamer, STScI's Hubble Formal Education manager. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA) and is managed by NASA’s Goddard Space Flight Center (GSFC) in Greenbelt, Md. The Space Telescope Science Institute (STScI) conducts Hubble science operations. The institute is operated for NASA by the Association of Universities for Research in Astronomy, Inc., Washington, D.C.
Satoh, Hiroko; Oda, Tomohiro; Nakakoji, Kumiyo; Uno, Takeaki; Tanaka, Hiroaki; Iwata, Satoru; Ohno, Koichi
2016-11-08
This paper describes our approach, which builds upon potential energy surface (PES)-based conformational analysis. This approach automatically deduces a conformational transition network, called a conformational reaction route map (r-map), by using the Scaled Hypersphere Search of the Anharmonic Downward Distortion Following method (SHS-ADDF). The PES-based conformational search has been achieved by using large ADDF, which makes it possible to trace only low transition state (TS) barriers while restraining bond lengths and structures with high free energy. It automatically samples the minima and TS structures by simply taking into account the mathematical features of the PES, without requiring any a priori specification of variable internal coordinates. An obtained r-map is composed of equilibrium (EQ) conformers connected by reaction routes via TS conformers, where all of the reaction routes are already confirmed during the deduction process using the intrinsic reaction coordinate (IRC) method. The post-calculation analysis of the deduced r-map is carried out interactively using the RMapViewer software we have developed. This paper presents computational details of the PES-based conformational analysis and its application to D-glucose. The calculations have been performed for an isolated glucose molecule in the gas phase at the RHF/6-31G level. The obtained conformational r-map for α-D-glucose is composed of 201 EQ and 435 TS conformers, and that for β-D-glucose is composed of 202 EQ and 371 TS conformers. In the post-calculation analysis of the conformational r-maps using the RMapViewer software we have found multiple minimum energy paths (MEPs) between the global minima of the ¹C₄ and ⁴C₁ chair conformations. The analysis using RMapViewer allows us to confirm the thermodynamic and kinetic predominance of the ⁴C₁ conformation; that is, the potential energy of the global minimum of ⁴C₁ is lower than that of ¹C₄ (thermodynamic predominance), and the highest energy among the TS structures along a route from ⁴C₁ to ¹C₄ is lower than that from ¹C₄ to ⁴C₁ (kinetic predominance).
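Comparing the highest TS energy encountered along competing routes between two EQ conformers, as in the kinetic argument above, is essentially a minimax-path query on the r-map graph. A small hedged sketch of that query (the graph and energies below are invented, and this is not RMapViewer):

# Hedged sketch: treat an r-map as a graph whose edges connect EQ conformers
# through TS structures, and find the route whose highest TS energy is lowest
# (a minimax / widest-path variant of Dijkstra's search).
# The tiny example graph and energies below are invented, not glucose data.
import heapq

def lowest_max_barrier(routes, start, goal):
    """routes: dict {eq: [(neighbor_eq, ts_energy), ...]}, energies in e.g. kcal/mol."""
    best = {start: float("-inf")}        # lowest achievable max barrier found so far
    heap = [(float("-inf"), start)]
    while heap:
        barrier, eq = heapq.heappop(heap)
        if eq == goal:
            return barrier
        if barrier > best.get(eq, float("inf")):
            continue                     # stale heap entry
        for nxt, ts in routes.get(eq, []):
            new_barrier = max(barrier, ts)
            if new_barrier < best.get(nxt, float("inf")):
                best[nxt] = new_barrier
                heapq.heappush(heap, (new_barrier, nxt))
    return None

# Usage (invented numbers): two routes from "4C1" to "1C4"; the search picks
# the one whose highest TS lies lower.
rmap = {"4C1": [("B1", 9.0), ("B2", 12.0)],
        "B1":  [("1C4", 14.0)],
        "B2":  [("1C4", 10.5)]}
print(lowest_max_barrier(rmap, "4C1", "1C4"))   # -> 12.0 (via B2)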
Data-proximate Visualization via Unidata Cloud Technologies
NASA Astrophysics Data System (ADS)
Fisher, W. I.; Oxelson Ganter, J.; Weber, J.
2016-12-01
The rise in cloud computing, coupled with the growth of "Big Data", has led to a migration away from local scientific data storage. The increasing size of remote scientific data sets, however, makes it difficult for scientists to subject them to large-scale analysis and visualization. These large datasets can take an inordinate amount of time to download; subsetting is a potential solution, but subsetting services are not yet ubiquitous. Data providers may also pay steep prices, as many cloud providers meter data based on how much data leaves their cloud service. The solution to this problem is a deceptively simple one: move data analysis and visualization tools to the cloud, so that scientists may perform data-proximate analysis and visualization. This results in increased transfer speeds, while egress costs are lowered or completely eliminated. The challenge now becomes creating tools which are cloud-ready. The solution to this challenge is provided by Application Streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations. When coupled with containerization technology such as Docker, we are able to easily deploy legacy analysis and visualization software to the cloud whilst retaining access via a desktop, netbook, a smartphone, or the next generation of hardware, whatever it may be. Unidata has harnessed Application Streaming to provide a cloud-capable version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved.
Cloud-based data-proximate visualization and analysis
NASA Astrophysics Data System (ADS)
Fisher, Ward
2017-04-01
The rise in cloud computing, coupled with the growth of "Big Data", has led to a migration away from local scientific data storage. The increasing size of remote scientific data sets, however, makes it difficult for scientists to subject them to large-scale analysis and visualization. These large datasets can take an inordinate amount of time to download; subsetting is a potential solution, but subsetting services are not yet ubiquitous. Data providers may also pay steep prices, as many cloud providers meter data based on how much data leaves their cloud service. The solution to this problem is a deceptively simple one: move data analysis and visualization tools to the cloud, so that scientists may perform data-proximate analysis and visualization. This results in increased transfer speeds, while egress costs are lowered or completely eliminated. The challenge now becomes creating tools which are cloud-ready. The solution to this challenge is provided by Application Streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations. When coupled with containerization technology such as Docker, we are able to easily deploy legacy analysis and visualization software to the cloud whilst retaining access via a desktop, netbook, a smartphone, or the next generation of hardware, whatever it may be. Unidata has harnessed Application Streaming to provide a cloud-capable version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved.
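On the deployment side, the container step mentioned in both abstracts can be as simple as publishing the port of a containerized legacy application; a hedged sketch follows (the image name and port are placeholders, not Unidata's actual IDV configuration):

# Hedged sketch of the containerized-deployment step (not Unidata's actual
# IDV setup): launch a legacy visualization application inside a Docker
# container and publish the port its streaming front-end listens on.
# The image name "example/legacy-viz" and port 6901 are placeholders.
import subprocess

def launch_streamed_app(image="example/legacy-viz", host_port=6901,
                        container_port=6901):
    cmd = [
        "docker", "run",
        "--rm", "-d",                            # remove container on exit, detach
        "-p", f"{host_port}:{container_port}",   # expose the streaming port
        image,
    ]
    container_id = subprocess.check_output(cmd, text=True).strip()
    print(f"Streaming front-end available at http://localhost:{host_port}/")
    return container_id

# Usage (sketch): cid = launch_streamed_app()
# ...later: subprocess.run(["docker", "stop", cid])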
NASA Astrophysics Data System (ADS)
Maibach, E.; Cullen, H. M.; Witte, J.
2013-12-01
Climate change is influencing every region of the nation through weather and climatic events including heat waves, droughts, extreme precipitation and floods, more intense hurricanes, and forest fires, yet most Americans continue to perceive climate change as a problem distant in time (with impacts a generation or more away), and in space (that will primarily affect other countries, not the United States). This may be due, in part, to the fact that climate change is often described in global, abstract, and analytical terms that are hard for people to connect to their own lives. The impacts of climate change, however, can be personally experienced at the local level, including through unusual weather events; cognitive science has shown that the human brain is more adept at learning through personal experience than through analytical reasoning. In this paper we will describe our efforts to enable America's TV weathercasters to embrace the role of climate educator. Weathercasters are a relatively small cohort of highly skilled communication professionals who are optimally positioned to reach a large majority of the American public, and help move their viewers beyond an abstract (distant) notion of global climate change and toward an understanding of climate change that is both local and concrete. Approximately 70% of American adults watch local TV news, and their primary reason for doing so is to learn about the weather. Our research has shown that TV weathercasters are second only to scientists and government science agencies as trusted sources of information about climate change. Our surveys have also shown that - in every region of the country - many TV weathercasters are willing to embrace the role of climate educator, if certain barriers can be overcome. Our experimental pilot-test - in Columbia, South Carolina - of a model developed to help overcome those barriers demonstrated that when TV weathercasters make an effort to educate their viewers about the local ramifications of climate change, their viewers learn. Our current attempts to scale up the model on a limited basis - in one state as a field experiment, and elsewhere around the nation on an uncontrolled basis - are showing promise in terms of attracting increasing numbers of participating weathercasters. Lastly, professional associations that represent TV weathercasters (AMS and NWA), and government agencies that produce climate and weather data for meteorologists (NOAA and NASA), are committed to helping scale up this model so that all interested TV weathercasters have easy access to localized information through which to educate their viewers about local weather and related implications of climate change. In sum, by engaging and empowering TV weathercasters as climate educators, we seek to increase public understanding of the relationships among climate, climate variability, climate change, weather extremes and community vulnerability, and we believe this model has considerable potential.
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.
["A male view?" Texts on feminism film theory].
Lippert, R
1994-11-01
The author traces the course taken by psychoanalytically oriented feminist film theory from its beginnings in the late seventies. She situates its origins in the Anglo-American debate about the exclusion of female subjectivity from the cinema and the new awareness of the problem of the cinematic mise-en-scène of the gaze, of "visual pleasure". First, massive criticism was levelled at the exclusively male/patriarchal gaze of the viewer; then emphasis centred on the specifically female gaze as a category in aesthetic theory. Ultimately, psychoanalytic feminist film theory has turned its attention to films for women, melodramas and early movies in an attempt to capture the respective historical forms of female subjectivity that they reflect.
A simple integrated system for electrophysiologic recordings in animals
Slater, Bernard J.; Miller, Neil R.; Bernstein, Steven L.; Flower, Robert W.
2009-01-01
This technical note describes a modification to a fundus camera that permits simultaneous recording of pattern electroretinograms (pERGs) and pattern visual evoked potentials (pVEPs). The modification consists of placing an organic light-emitting diode (OLED) in the split-viewer pathway of a fundus camera, in a plane conjugate to the subject’s pupil. In this way, a focused image of the OLED can be delivered to a precisely known location on the retina. The advantage of using an OLED is that it can achieve high luminance while maintaining high contrast, and with minimal degradation over time. This system is particularly useful for animal studies, especially when precise retinal positioning is required. PMID:19137347
PRay - A graphical user interface for interactive visualization and modification of rayinvr models
NASA Astrophysics Data System (ADS)
Fromm, T.
2016-01-01
PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
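For concreteness, the objective criterion the abstract singles out, the mean square error, and the related peak signal-to-noise ratio for 8-bit gray-scale images can be computed as follows (a generic illustration, not code from the survey):

# Generic illustration (not from the survey): mean square error (MSE) and
# peak signal-to-noise ratio (PSNR) between an original and a compressed
# 8-bit gray-scale image, the kind of objective criterion discussed above.
import math

def mse(original, compressed):
    """Both images given as equal-length flat sequences of 8-bit pixel values."""
    n = len(original)
    return sum((o - c) ** 2 for o, c in zip(original, compressed)) / n

def psnr(original, compressed, max_value=255):
    err = mse(original, compressed)
    if err == 0:
        return float("inf")        # identical images
    return 10 * math.log10(max_value ** 2 / err)

# Usage (sketch): two tiny 2x2 "images" flattened to lists
print(mse([52, 55, 61, 66], [54, 55, 60, 70]))               # -> 5.25
print(round(psnr([52, 55, 61, 66], [54, 55, 60, 70]), 2))    # PSNR in dB

Such pixel-wise measures are exactly what the survey argues correlate poorly with viewer response, motivating quality metrics built on models of the human visual system.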