DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.
Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition
2013-01-01
Felix X. Yu, Liangliang Cao, Rogerio S. Feris, John R. Smith, Shih-Fu Chang (Columbia University; IBM T. J. Watson Research Center). These remarks supplement "Designing Category-Level Attributes for Discriminative Visual Recognition" [3], first providing an overview of the proposed approach.
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
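The segmented, multi-scale overview idea described above can be illustrated with a small sketch (hypothetical code, not Falcon's actual implementation): irregularly sampled readings are grouped into fixed-duration windows, and each window is summarized for an overview display.

```python
def segment_series(times, values, window):
    """Group irregularly sampled (time, value) pairs into fixed-duration
    windows and summarize each window as (start, end, min, mean, max).
    A toy stand-in for a segmented time series overview."""
    start = min(times)
    buckets = {}
    for t, v in zip(times, values):
        idx = int((t - start) // window)  # which window this sample falls in
        buckets.setdefault(idx, []).append(v)
    overview = []
    for idx in sorted(buckets):
        seg = buckets[idx]
        lo = start + idx * window
        overview.append((lo, lo + window, min(seg), sum(seg) / len(seg), max(seg)))
    return overview
```

A detail view could then re-query the raw samples within any one window, mirroring the overview-plus-detail pattern the abstract describes.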
Visualization of the NASA ICON mission in 3d
NASA Astrophysics Data System (ADS)
Mendez, R. A., Jr.; Immel, T. J.; Miller, N.
2016-12-01
The ICON Explorer mission (http://icon.ssl.berkeley.edu) will provide several data products for the atmosphere and ionosphere after its launch in 2017. This project will support the mission by investigating the capability of tools such as Google Earth or CesiumJS, with assistance from Java or Python, for visualization of current and predicted observatory characteristics and data acquisition. Ideally, we will bring this visualization into the homes of people without the need for additional software. The path of launching a standalone website, building this environment, and a full toolkit will be discussed. Eventually, the initial work could lead to the addition of a downloadable visualization package for mission demonstration or science visualization.
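As a rough illustration of the browser-oriented approach mentioned above, the sketch below emits a KML ground track that Google Earth (or a KML-capable CesiumJS viewer) can load; the function name and coordinates are invented for this example, and real mission tracks would come from orbit propagation.

```python
def ground_track_kml(points, name="Ground track"):
    """Build a minimal KML document containing one LineString placemark.
    `points` is an iterable of (longitude, latitude, altitude_m) tuples."""
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{name}</name>'
        '<LineString><altitudeMode>absolute</altitudeMode>'
        f'<coordinates>{coords}</coordinates></LineString>'
        '</Placemark></Document></kml>'
    )
```

Because the output is plain XML served over the web, no software beyond a browser-based viewer is required on the user's side, which matches the "no additional software" goal stated in the abstract.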
DspaceOgre 3D Graphics Visualization Tool
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.
2011-01-01
This general-purpose 3D graphics visualization C++ tool is designed for visualizing simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. The software adds support for custom programs at the graphics processing unit (GPU) level for better performance, and improves the messaging interface it exposes for use as a visualization server.
Designing multifocal corneal models to correct presbyopia by laser ablation
NASA Astrophysics Data System (ADS)
Alarcón, Aixa; Anera, Rosario G.; Del Barco, Luis Jiménez; Jiménez, José R.
2012-01-01
Two multifocal corneal models and an aspheric model designed to correct presbyopia by corneal photoablation were evaluated. The design of each model was optimized to achieve the best possible visual quality for both near and distance vision. In addition, we evaluated the effect of miosis and pupil decentration on visual quality. The corrected model with the central zone for near vision provides better results, since it requires less ablated corneal surface area, permits higher addition values, presents more stable visual quality with pupil-size variations, and shows lower high-order aberrations.
When kinesthetic information is neglected in learning a novel bimanual rhythmic coordination.
Zhu, Qin; Mirich, Todd; Huang, Shaochen; Snapp-Childs, Winona; Bingham, Geoffrey P
2017-08-01
Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and that different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be the same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern with or without visual information in addition to kinesthesis. In a posttest, all participants were tested both without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who had practiced using only kinesthetic information showed improvement, but those who had practiced with visual information in addition showed markedly less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual-task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.
1980-12-01
...primary and secondary visual cortex or in the secondary visual cortex itself. When the secondary visual cortex is electrically stimulated, the subject... effect enhances their excitability, which reduces the additional stimulation (electrical or chemical) required to elicit an action potential. These... and the peripheral area with rods. The rods have a very low light-intensity threshold and provide stimulation to optic nerve fibers for low light
VISUAL PLUMES MIXING ZONE MODELING SOFTWARE
The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...
GAC: Gene Associations with Clinical, a web based application.
Zhang, Xinyan; Rupji, Manali; Kowalski, Jeanne
2017-01-01
We present GAC, a Shiny R-based tool for interactive visualization of clinical associations based on high-dimensional data. The tool provides a web-based suite to perform supervised principal component analysis (SuperPC), an approach that combines high-dimensional data, such as gene expression, with clinical data to infer clinical associations. We extended the approach to address binary outcomes, in addition to continuous and time-to-event data, in our package, thereby increasing the use and flexibility of SuperPC. Additionally, the tool provides an interactive visualization for summarizing results based on a forest plot for both binary and time-to-event data. In summary, the GAC suite of tools provides a one-stop shop for conducting statistical analysis to identify and visualize the association between a clinical outcome of interest and high-dimensional data types, such as genomic data. Our GAC package has been implemented in R and is available via http://shinygispa.winship.emory.edu/GAC/. The developmental repository is available at https://github.com/manalirupji/GAC.
Visual Multipoles And The Assessment Of Visual Sensitivity To Displayed Images
NASA Astrophysics Data System (ADS)
Klein, Stanley A.
1989-08-01
The contrast sensitivity function (CSF) is widely used to specify the sensitivity of the visual system. Each point of the CSF specifies the amount of contrast needed to detect a sinusoidal grating of a given spatial frequency. This paper describes a set of five mathematically related visual patterns, called "multipoles," that should replace the CSF for measuring visual performance. The five patterns (ramp, edge, line, dipole and quadrupole) are localized in space rather than being spread out as sinusoidal gratings. The multipole sensitivity of the visual system provides an alternative characterization that complements the CSF in addition to offering several advantages. This paper provides an overview of the properties and uses of the multipole stimuli. This paper is largely a summary of several unpublished manuscripts with excerpts from them. Derivations and full references are omitted here. Please write me if you would like the full manuscripts.
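The mathematical relationship among the five patterns can be stated simply: each member of the family is the spatial derivative of the previous one (ramp to edge, edge to line, line to dipole, dipole to quadrupole). A toy discrete illustration, using an invented luminance profile:

```python
def derivative(profile):
    # finite-difference spatial derivative of a 1-D luminance profile
    return [b - a for a, b in zip(profile, profile[1:])]

# Invented blurred-ramp profile; successive differentiation yields the
# remaining members of the multipole family.
ramp = [0, 0, 1, 2, 3, 3, 3]
edge = derivative(ramp)          # step-like transition
line = derivative(edge)          # localized positive and negative lobes
dipole = derivative(line)        # adjacent opposite-polarity pair
quadrupole = derivative(dipole)  # pair of opposed dipoles
```

Each differentiation further localizes the pattern in space, which is why the multipole stimuli, unlike gratings, probe sensitivity at a specific retinal location.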
Primary Numberplay: "InterActivities" for the Discovery of Mathematics Concepts. User's Guide.
ERIC Educational Resources Information Center
Sullivan, W. Edward
This document plus diskette product provides nine interactive puzzles and games that both teach and provide practice with simple addition and subtraction concepts. The activities address these skills through carrying in addition and regrouping in subtraction. The activities address cognitive skills such as problem solving, planning, visual pattern…
Human microbiome visualization using 3D technology.
Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C
2011-01-01
High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
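The core 3D heat map mapping, where patients and species index the horizontal plane and relative abundance becomes bar height, can be sketched in a few lines (a toy stand-in, not the game-engine software described above):

```python
def heatmap3d_points(abundance):
    """Map a patients-by-species abundance table to (x, y, z) tuples for a
    3D heat map: patient index -> x, species index -> y, abundance -> z."""
    pts = []
    for i, row in enumerate(abundance):
        for j, value in enumerate(row):
            pts.append((i, j, value))
    return pts
```

A rendering engine would then draw one bar (or colored voxel) per tuple; the height axis is the extra channel that a flat 2D heat map must encode as color alone.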
The influence of attention, learning, and motivation on visual search.
Dodd, Michael D; Flowers, John H
2012-01-01
The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended both to commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and postdocs) has provided a chapter that both summarizes and expands on their original presentation. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters.
Best Visual Presentation--Observations from the Award Committee. IR Applications. Volume 4
ERIC Educational Resources Information Center
Bers, Trudy
2005-01-01
In 2003, the Association for Institutional Research (AIR) initiated the Best Visual Presentation (BVP) award to acknowledge the contributions made through new ways of professional communication, in addition to those made through more traditional scholarly formats. The purpose of this "IR Applications" is to provide observations from the BVP Award…
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Hominoid visual brain structure volumes and the position of the lunate sulcus.
de Sousa, Alexandra A; Sherwood, Chet C; Mohlberg, Hartmut; Amunts, Katrin; Schleicher, Axel; MacLeod, Carol E; Hof, Patrick R; Frahm, Heiko; Zilles, Karl
2010-04-01
It has been argued that changes in the relative sizes of visual system structures predated an increase in brain size and provide evidence of brain reorganization in hominins. However, data about the volume and anatomical limits of visual brain structures in the extant taxa phylogenetically closest to humans, the apes, remain scarce, thus complicating tests of hypotheses about evolutionary changes. Here, we analyze new volumetric data for the primary visual cortex and the lateral geniculate nucleus to determine whether or not the human brain departs from allometrically expected patterns of brain organization. Primary visual cortex volumes were compared to lunate sulcus position in apes to investigate whether or not inferences about brain reorganization made from fossil hominin endocasts are reliable in this context. In contrast to previous studies, in which all species were relatively poorly sampled, the current study attempted to evaluate the degree of intraspecific variability by including numerous hominoid individuals (particularly Pan troglodytes and Homo sapiens). In addition, we present and compare volumetric data from three hominoid species not previously sampled: Pan paniscus, Pongo pygmaeus, and Symphalangus syndactylus. These new data demonstrate that hominoid visual brain structure volumes vary more than previously appreciated. In addition, humans have relatively reduced primary visual cortex and lateral geniculate nucleus volumes as compared to allometric predictions from other hominoids. These results suggest that inferences about the position of the lunate sulcus on fossil endocasts may provide information about brain organization.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and the source data affect each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Accessibility limits recall from visual working memory.
Rajsic, Jason; Swan, Garrett; Wilson, Daryl E; Pratt, Jay
2017-09-01
In this article, we demonstrate limitations of the accessibility of information in visual working memory (VWM). Recently, cued recall has been used to estimate the fidelity of information in VWM, where the feature of a cued object is reproduced from memory (Bays, Catalao, & Husain, 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Response error in these tasks has been largely studied with respect to failures of encoding and maintenance; however, the retrieval operations used in these tasks remain poorly understood. By varying the number and type of object features provided as a cue in a visual delayed-estimation paradigm, we directly assess the nature of retrieval errors in delayed estimation from VWM. Our results demonstrate that providing additional object features in a single cue reliably improves recall, largely by reducing swap, or misbinding, responses. In addition, performance simulations using the binding pool model (Swan & Wyble, 2014) were able to mimic this pattern of performance across a large span of parameter combinations, demonstrating that the binding pool provides a possible mechanism underlying this pattern of results and is not merely a symptom of one particular parametrization. We conclude that accessing visual working memory is a noisy process that can lead to errors over and above those of encoding and maintenance limitations.
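For context, response error in delayed estimation is typically measured on a circular feature space (e.g., color or orientation angle), and a swap response is one that lands near a non-target's feature rather than the cued target's. A hypothetical sketch of that classification (names and the fixed tolerance are invented for illustration; published analyses generally fit probabilistic mixture models instead of using a hard cutoff):

```python
def circ_error(response, target):
    # signed angular error in degrees, wrapped to (-180, 180]
    d = (response - target) % 360
    return d - 360 if d > 180 else d

def classify(response, target, nontargets, tol=30):
    """Label a response as a target report, a swap (misbinding to a
    non-target), or a guess, using a crude angular tolerance."""
    if abs(circ_error(response, target)) <= tol:
        return "target"
    if any(abs(circ_error(response, nt)) <= tol for nt in nontargets):
        return "swap"
    return "guess"
```

Under this framing, the paper's finding that richer cues reduce swap responses would show up as fewer trials falling into the "swap" bin.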
NASA Astrophysics Data System (ADS)
Kassin, A.; Cody, R. P.; Barba, M.; Escarzaga, S. M.; Score, R.; Dover, M.; Gaylord, A. G.; Manley, W. F.; Habermann, T.; Tweedie, C. E.
2015-12-01
The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who is doing what, when, and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information. The mapping application includes new reference data layers and an updated ship-tracks layer. Visual enhancements are achieved by redeveloping the front end from Flex to HTML5 and JavaScript, which now provides access to mobile users on tablets and cell phones. New tools allow users to navigate, select, draw, measure, print, use a time slider, and more. Other additions include a back-end Apache Solr search platform that gives users the capability to perform advanced searches throughout the ARMAP database. Furthermore, a new query-builder interface has been developed to provide more intuitive controls for generating complex queries. These improvements have been made to increase awareness of projects funded by numerous entities in the Arctic, enhance coordination for logistics support, help identify geographic gaps in research efforts, and potentially foster more collaboration among researchers working in the region. Additionally, ARMAP can be used to demonstrate past, present, and future research efforts supported by the U.S. Government.
Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.
Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath
2016-01-01
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.
The NAS Computational Aerosciences Archive
NASA Technical Reports Server (NTRS)
Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)
1995-01-01
In order to further the state-of-the-art in computational aerosciences (CAS) technology, researchers must be able to gather and understand existing work in the field. One aspect of this information gathering is studying published work available in scientific journals and conference proceedings. However, current scientific publications are very limited in the type and amount of information that they can disseminate. Information is typically restricted to text, a few images, and a bibliography list. Additional information that might be useful to the researcher, such as additional visual results, referenced papers, and datasets, are not available. New forms of electronic publication, such as the World Wide Web (WWW), limit publication size only by available disk space and data transmission bandwidth, both of which are improving rapidly. The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is in the process of creating an archive of CAS information on the WWW. This archive will be based on the large amount of information produced by researchers associated with the NAS facility. The archive will contain technical summaries and reports of research performed on NAS supercomputers, visual results (images, animations, visualization system scripts), datasets, and any other supporting meta-information. This information will be available via the WWW through the NAS homepage, located at http://www.nas.nasa.gov/, fully indexed for searching. The main components of the archive are technical summaries and reports, visual results, and datasets. Technical summaries are gathered every year by researchers who have been allotted resources on NAS supercomputers. These summaries, together with supporting visual results and references, are browsable by interested researchers. Referenced papers made available by researchers can be accessed through hypertext links. 
Technical reports are in-depth accounts of tools and applications research projects performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs.
All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to those interested in computational sciences. At present, only information that may be distributed internationally is made available via the archive. Studies are underway to determine security requirements and solutions to make additional information available. By providing access to the archive via the WWW, the process of information gathering can be more productive and fruitful due to ease of access and ability to manage many different types of information. As the archive grows, additional resources from outside NAS will be added, providing a dynamic source of research results.
GAC: Gene Associations with Clinical, a web based application
Zhang, Xinyan; Rupji, Manali; Kowalski, Jeanne
2018-01-01
We present GAC, a Shiny/R-based tool for interactive visualization of clinical associations based on high-dimensional data. The tool provides a web-based suite to perform supervised principal component analysis (SuperPC), an approach that uses high-dimensional data, such as gene expression, combined with clinical data to infer clinical associations. We extended the approach to address binary outcomes, in addition to continuous and time-to-event data, in our package, thereby increasing the use and flexibility of SuperPC. Additionally, the tool provides an interactive visualization for summarizing results based on a forest plot for both binary and time-to-event data. In summary, the GAC suite of tools provides a one-stop shop for conducting statistical analysis to identify and visualize the association between a clinical outcome of interest and high-dimensional data types, such as genomic data. Our GAC package has been implemented in R and is available via http://shinygispa.winship.emory.edu/GAC/. The developmental repository is available at https://github.com/manalirupji/GAC. PMID:29263780
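As a hedged illustration of the SuperPC idea (not the GAC implementation itself): screen features by univariate association with the outcome, take the first principal component of the screened submatrix, and regress the outcome on it. All data and thresholds below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated expression matrix: 100 samples x 500 genes, where the first
# 20 genes carry a shared latent signal that also drives the outcome.
n, p, k = 100, 500, 20
latent = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, :k] += latent[:, None]
y = latent + 0.5 * rng.normal(size=n)

# Step 1: univariate screening -- keep genes whose absolute correlation
# with the outcome exceeds an (illustrative) threshold.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
selected = np.abs(r) > 0.3

# Step 2: first principal component of the screened submatrix.
U, s, Vt = np.linalg.svd(Xc[:, selected], full_matrices=False)
pc1 = U[:, 0] * s[0]

# Step 3: regress the outcome on the supervised PC.
beta = np.polyfit(pc1, y, 1)
```

For time-to-event or binary outcomes, the regression in steps 1 and 3 would be swapped for Cox or logistic models, which is the extension the abstract describes.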
Aygün, Nurcihan; Uludağ, Mehmet; İşgör, Adnan
2017-01-01
Objective: We evaluated the contribution of intraoperative neuromonitoring to the visual and functional identification of the external branch of the superior laryngeal nerve. Material and Methods: The prospectively collected data of patients who underwent thyroid surgery with intraoperative neuromonitoring for external branch of the superior laryngeal nerve exploration were assessed retrospectively. The surface endotracheal tube-based Medtronic NIM3 intraoperative neuromonitoring device was used. The external branch of the superior laryngeal nerve function was evaluated by the cricothyroid muscle twitch. In addition, the contribution of the external branch of the superior laryngeal nerve to vocal cord adduction was evaluated using electromyographic records. Results: The study included data of 126 (female, 103; male, 23) patients undergoing thyroid surgery, with a mean age of 46.2±12.2 years (range, 18–75 years), and 215 neck sides were assessed. Two hundred and one (93.5%) of 215 external branches of the superior laryngeal nerve were identified, of which 60 (27.9%) were identified visually before being stimulated with a monopolar stimulator probe. Eighty-nine (41.4%) external branches of the superior laryngeal nerve were identified visually after being identified with a probe. Although 52 (24.1%) external branches of the superior laryngeal nerve were identified with a probe, they were not visualized. Intraoperative neuromonitoring provided a significant contribution to visual (p<0.001) and functional (p<0.001) identification of external branches of the superior laryngeal nerve. Additionally, positive electromyographic responses were recorded from 160 external branches of the superior laryngeal nerve (74.4%). Conclusion: Intraoperative neuromonitoring provides an important contribution to visual and functional identification of the external branch of the superior laryngeal nerve.
We believe that it cannot be predicted whether the external branch of the superior laryngeal nerve is at risk, and the nerve is often invisible; thus, intraoperative neuromonitoring may routinely be used in superior pole dissection. The glottic electromyography response obtained via external branch of the superior laryngeal nerve stimulation provides quantifiable information in addition to the simple visualization of the cricothyroid muscle twitch. PMID:28944328
King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick
2014-01-01
There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key behavioral predictors from public health theory: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrated that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.
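The internal-reliability figure quoted for the 7-item scale (α = .91) is Cronbach's alpha; as a hedged illustration with synthetic responses (not the study's data), it can be computed directly from item and total-score variances:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic 7-item scale: each item reflects one shared construct plus noise.
rng = np.random.default_rng(1)
trait = rng.normal(size=200)
items = trait[:, None] + 0.6 * rng.normal(size=(200, 7))
alpha = cronbach_alpha(items)
```

With this signal-to-noise level the computed alpha lands near the .9 range reported in the abstract; items that share no common construct would drive it toward zero.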
Creative Approaches to School Counseling: Using the Visual Expressive Arts as an Intervention
ERIC Educational Resources Information Center
Chibbaro, Julia S.; Camacho, Heather
2011-01-01
This paper examines the use of creative arts in school counseling. There is a specific focus on the use of visual arts, particularly such methods as drawing and painting. Existing literature, which supports the use of art in school counseling, provides the paper's rationale. In addition, the paper explores different art techniques that school…
PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices
ERIC Educational Resources Information Center
Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões
2013-01-01
This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…
Data Visualization Challenges and Opportunities in User-Oriented Application Development
NASA Astrophysics Data System (ADS)
Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.
2015-12-01
This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable resolution visualizations of science data available to application developers including NASA's Global Imagery Browse Services (GIBS); however, integrating and leveraging visualizations in modern applications faces a number of challenges, including:
- Varying visualized Earth "tile sizes", resulting in challenges merging disparate sources
- Multiple visualization frameworks and toolkits with varying strengths and weaknesses
- Global composite imagery vs. imagery matching EOSDIS granule distribution
- Challenges visualizing geographically overlapping data with different temporal bounds
- User interaction with overlapping or collocated data
- Complex data boundaries and shapes combined with multi-orbit data and polar projections
- Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them
In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.
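One recurring mismatch above, imagery sources tiled with different grid layouts, comes down to converting tile indices to geographic bounds so tiles from different sources can be aligned. A minimal sketch for a simple geographic (Plate Carrée) tiling; the 2x1 level-0 layout is an illustrative assumption, not GIBS's actual configuration:

```python
def tile_bounds(level: int, row: int, col: int, cols_at_level0: int = 2):
    """Return (west, south, east, north) in degrees for a tile in a
    geographic tiling with cols_at_level0 x 1 tiles at level 0.
    Each level doubles the grid resolution in both directions."""
    cols = cols_at_level0 * (2 ** level)
    rows = cols // 2                      # 2:1 aspect covers 360 x 180 deg
    assert 0 <= row < rows and 0 <= col < cols
    tile_deg = 360.0 / cols               # tiles are square in degrees
    west = -180.0 + col * tile_deg
    north = 90.0 - row * tile_deg
    return (west, north - tile_deg, west + tile_deg, north)
```

Merging two sources then reduces to computing bounds under each source's own `cols_at_level0` and level and intersecting the rectangles.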
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11
The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
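A force-to-scroll-rate mapping of the kind described can be sketched as a simple transfer function with a deadband; the gain and threshold values are illustrative assumptions, not parameters from the patent:

```python
def scroll_rate(force: float, deadband: float = 0.2, gain: float = 40.0) -> float:
    """Map force applied against a haptic boundary (arbitrary units) to a
    scroll rate (lines/second). Forces inside the deadband produce no
    scrolling, so the user can rest against the boundary and feel it
    haptically without the display moving."""
    magnitude = abs(force)
    if magnitude <= deadband:
        return 0.0
    # Rate grows with force beyond the deadband; sign follows direction.
    return gain * (magnitude - deadband) * (1.0 if force > 0 else -1.0)
```

The deadband is what lets the boundary act as a passive haptic cue, while the proportional term gives the "rate related to magnitude of applied force" behavior the abstract describes.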
Wide-Field Fundus Autofluorescence for Retinitis Pigmentosa and Cone/Cone-Rod Dystrophy.
Oishi, Akio; Oishi, Maho; Ogino, Ken; Morooka, Satoshi; Yoshimura, Nagahisa
2016-01-01
Retinitis pigmentosa and cone/cone-rod dystrophy are inherited retinal diseases characterized by the progressive loss of rod and/or cone photoreceptors. To evaluate the status of rod/cone photoreceptors and visual function, visual acuity and visual field tests, electroretinogram, and optical coherence tomography are typically used. In addition to these examinations, fundus autofluorescence (FAF) has recently garnered attention. FAF visualizes the intrinsic fluorescent material in the retina, which is mainly lipofuscin contained within the retinal pigment epithelium. While conventional devices offer limited viewing angles in FAF, the recently developed Optos machine enables recording of wide-field FAF. With wide-field analysis, an association between abnormal FAF areas and visual function was demonstrated in retinitis pigmentosa and cone-rod dystrophy. In addition, the presence of "patchy" hypoautofluorescent areas was found to be correlated with symptom duration. Although physicians should be cautious when interpreting wide-field FAF results because the peripheral parts of the image are magnified significantly, this examination method provides previously unavailable information.
Information processing in the primate visual system - An integrated systems perspective
NASA Technical Reports Server (NTRS)
Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.
1992-01-01
The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.
Learning Reverse Engineering and Simulation with Design Visualization
NASA Technical Reports Server (NTRS)
Hemsworth, Paul J.
2018-01-01
The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).
Manneristic behaviors of visually impaired children.
Molloy, Alysha; Rowe, Fiona J
2011-09-01
To review the literature on visual impairment in children in order to determine which manneristic behaviors are associated with visual impairment, and to establish why these behaviors occur and whether severity of visual impairment influences these behaviors. A literature search utilizing PubMed, OVID, Google Scholar, and Web of Knowledge databases was performed. The University of Liverpool ( www.liv.ac.uk/orthoptics/research ) and local library facilities were also searched. The main manneristic or stereotypic behaviors associated with visual impairment are eye-manipulatory behaviors, such as eye poking and rocking. The degree of visual impairment influences the type of behavior exhibited by visually impaired children. Totally blind children are more likely to adopt body and head movements whereas sight-impaired children tend to adopt eye-manipulatory behaviors and rocking. The mannerisms exhibited most frequently are those that provide a specific stimulation to the child. Theories to explain these behaviors include behavioral, developmental, functional, and neurobiological approaches. Although the precise etiology of these behaviors is unknown, it is recognized that each of the theories is useful in providing some explanation of why certain behaviors may occur. The age at which the frequency of these behaviors decreases is associated with the child's increasing development, thus those visually impaired children with additional disabilities, whose development is impaired, are at an increased risk of developing and maintaining these behaviors. Certain manneristic behaviors of the visually impaired child may also help indicate the cause of visual impairment. There is a wide range of manneristic behaviors exhibited by visually impaired children. Some of these behaviors appear to be particularly associated with certain causes of visual impairment or severity of visual impairment, thus they may supply the practitioner with useful information. 
Further research into the prevalence of these behaviors in the visually impaired child is required in order to provide effective management.
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. A 3D visualization software system was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background: Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results: We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions: Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content, making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
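The reordering and clustering of tracks mentioned above can be approximated greedily: within a genomic window, order strains so that each track's neighbour is its most similar remaining strain. A hedged sketch using Hamming distance over genotype calls (illustrative, not the browser's actual algorithm):

```python
import numpy as np

def reorder_tracks(genotypes: np.ndarray) -> list:
    """Greedy nearest-neighbour ordering of strain tracks so adjacent
    tracks are similar within a genomic window.
    genotypes: (n_strains, n_sites) matrix of genotype calls."""
    n = genotypes.shape[0]
    # Pairwise Hamming distances between strains over this window.
    dist = (genotypes[:, None, :] != genotypes[None, :, :]).sum(axis=2)
    order, remaining = [0], set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: dist[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Because the distances are computed per window, the ordering can change along the genome, matching the abstract's point that track order "can vary throughout the genome".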
Tablet and Smartphone Accessibility Features in the Low Vision Rehabilitation
Irvine, Danielle; Zemke, Alex; Pusateri, Gregg; Gerlach, Leah; Chun, Rob; Jay, Walter M.
2014-01-01
Tablet and smartphone use is rapidly increasing in developed countries. With this upsurge in popularity, the devices themselves are becoming more user-friendly for all consumers, including the visually impaired. Traditionally, visually impaired patients have received optical rehabilitation in the forms of microscopes, stand magnifiers, handheld magnifiers, telemicroscopes, and electronic magnification such as closed circuit televisions (CCTVs). In addition to the optical and financial limitations of traditional devices, patients do not always view them as being socially acceptable. For this reason, devices are often underutilised by patients due to lack of use in public forums or when among peers. By incorporating smartphones and tablets into a patient’s low vision rehabilitation, in addition to traditional devices, one provides versatile and mainstream options, which may also be less expensive. This article explains what the accessibility features of tablets and smartphones are for the blind and visually impaired, how to access them, and offers an introduction to their use. PMID:27928274
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A
Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we will describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.
Harnessing the web information ecosystem with wiki-based visualization dashboards.
McKeon, Matt
2009-01-01
We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.
Visualization of permanent marks in progressive addition lenses by digital in-line holography
NASA Astrophysics Data System (ADS)
Perucho, Beatriz; Micó, Vicente
2013-04-01
A critical issue in the production of ophthalmic lenses is to guarantee correct centering and alignment throughout the manufacturing and mounting processes. To that end, progressive addition lenses (PALs) incorporate permanent marks at standardized locations on the lens. Those marks are engraved on the surface and provide the model identification and addition power of the PAL, and also serve as locator marks for re-inking the removable marks if necessary. Although the permanent marks should be visible by simple visual inspection, they are often faint and weak on new lenses, providing low contrast; obscured by scratches on older lenses; and partially occluded and difficult to recognize on tinted or anti-reflection coated lenses. In this contribution, we present an extremely simple visualization system for permanent marks in PALs based on digital in-line holography. Light emitted by a superluminescent diode (SLD) is used to illuminate the PAL, which is placed just before a digital (CCD) sensor. Thus, the CCD records an in-line hologram incoming from the diffracted wavefront provided by the PAL. As a result, it is possible to recover an in-focus image of the inspected region of the PAL by means of classical holographic tools applied in the digital domain. This numerical process involves digital recording of the in-line hologram, numerical back propagation to the PAL plane, and some digital processing to reduce noise and present a high-quality final image. Preliminary experimental results are provided showing the applicability of the proposed method.
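The numerical back-propagation step described here is commonly implemented with the angular spectrum method; the sketch below follows that textbook approach, with the wavelength, pixel pitch, and propagation distance chosen as illustrative values rather than the paper's actual setup:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field a distance z (metres) using the
    angular spectrum method; negative z back-propagates."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(kz * z) * (arg > 0)           # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hologram plane -> PAL plane: the field diffracted by the mark travels a
# distance z to the sensor; propagating by -z refocuses the mark.
wl, pitch, z = 680e-9, 5e-6, 20e-3           # illustrative parameters
aperture = np.zeros((256, 256), dtype=complex)
aperture[120:136, 120:136] = 1.0             # stand-in for an engraved mark
hologram_field = angular_spectrum_propagate(aperture, wl, pitch, z)
refocused = angular_spectrum_propagate(hologram_field, wl, pitch, -z)
```

In practice only the hologram's intensity is recorded, so the real reconstruction also contends with the twin image and noise, which is why the abstract mentions additional digital post-processing.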
An Evaluation of Wellness Assessment Visualizations for Older Adults
Reeder, Blaine; Yoo, Daisy; Aziz, Rafae; Thompson, Hilaire J.; Demiris, George
2015-01-01
Background: Smart home technologies provide a valuable resource to unobtrusively monitor health and wellness within an older adult population. However, the breadth and density of the available data, along with aging-associated decreases in working memory, prospective memory, spatial cognition, and processing speed, can make these data challenging for older adults to comprehend. We developed visualizations of smart home health data integrated into a framework of wellness. We evaluated the visualizations through focus groups with older adults and identified recommendations to guide the future development of visualizations. Materials and Methods: We conducted four focus groups with older adult participants (n=31) at an independent retirement community. Participants were presented with three different visualizations from a wellness pilot study. A qualitative descriptive analysis was conducted to identify thematic content. Results: We identified three themes related to processing and application of visualizations: (1) values of visualizations for wellness assessment, (2) cognitive processing approaches to visualizations, and (3) integration of health data for visualization. In addition, the focus groups highlighted key design considerations of visualizations important for supporting decision-making and evaluation assessments within integrated health displays. Conclusions: Participants found inherent value in having visualizations available to proactively engage with their healthcare provider. Integrating the visualizations into a wellness framework helped reduce the complexity of raw smart home data. There has been limited work on health visualizations from a consumer perspective, in particular for an older adult population. Creating appropriately designed visualizations is valuable for promoting consumer involvement within the shared decision-making process of care. PMID:25401414
Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-03-16
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne
A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets.
Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
RVA: A Plugin for ParaView 3.14
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-04
RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets.
Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
Improvements and Additions to NASA Near Real-Time Earth Imagery
NASA Technical Reports Server (NTRS)
Cechini, Matthew; Boller, Ryan; Baynes, Kathleen; Schmaltz, Jeffrey; DeLuca, Alexandar; King, Jerome; Thompson, Charles; Roberts, Joe; Rodriguez, Joshua; Gunnoe, Taylor;
2016-01-01
For many years, the NASA Global Imagery Browse Services (GIBS) has worked closely with the Land, Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) system to provide near real-time imagery visualizations of AIRS (Atmospheric Infrared Sounder), MLS (Microwave Limb Sounder), MODIS (Moderate Resolution Imaging Spectroradiometer), OMI (Ozone Monitoring Instrument), and recently VIIRS (Visible Infrared Imaging Radiometer Suite) science parameters. These visualizations are readily available through standard web services and the NASA Worldview client. Access to near real-time imagery provides a critical capability to GIBS and Worldview users. GIBS continues to focus on improving its commitment to providing near real-time imagery for end-user applications. The focus of this presentation will be the following completed or planned GIBS system and imagery enhancements relating to near real-time imagery visualization.
Link between orientation and retinotopic maps in primary visual cortex
Paik, Se-Bum; Ringach, Dario L.
2012-01-01
Maps representing the preference of neurons for the location and orientation of a stimulus on the visual field are a hallmark of primary visual cortex. It is not yet known how these maps develop and what function they play in visual processing. One hypothesis postulates that orientation maps are initially seeded by the spatial interference of ON- and OFF-center retinal receptive field mosaics. Here we show that such a mechanism predicts a link between the layout of orientation preferences around singularities of different signs and the cardinal axes of the retinotopic map. Moreover, we confirm the predicted relationship holds in tree shrew primary visual cortex. These findings provide additional support for the notion that spatially structured input from the retina may provide a blueprint for the early development of cortical maps and receptive fields. More broadly, it raises the possibility that spatially structured input from the periphery may shape the organization of primary sensory cortex of other modalities as well. PMID:22509015
Real-time Magnetic Resonance Imaging Guidance for Cardiovascular Procedures
Horvath, Keith A.; Li, Ming; Mazilu, Dumitru; Guttman, Michael A.; McVeigh, Elliot R.
2008-01-01
Magnetic resonance imaging (MRI) of the cardiovascular system has proven to be an invaluable diagnostic tool. Given the ability to allow for real-time imaging, MRI guidance of intraoperative procedures can provide superb visualization, which can facilitate a variety of interventions and minimize the trauma of the operations as well. In addition to the anatomic detail, MRI can provide intraoperative assessment of organ and device function. Instruments and devices can be marked to enhance visualization and tracking, all of which is an advance over standard x-ray or ultrasonic imaging. PMID:18395633
ERIC Educational Resources Information Center
Katsioloudis, Petros J.; Stefaniak, Jill E.
2018-01-01
Results from a number of studies indicate that the use of drafting models can positively influence the spatial visualization ability for engineering technology students. However, additional variables such as light, temperature, motion and color can play an important role but research provides inconsistent results. Considering this, a set of 5…
A visualization system for CT based pulmonary fissure analysis
NASA Astrophysics Data System (ADS)
Pu, Jiantao; Zheng, Bin; Park, Sang Cheol
2009-02-01
In this study we describe a visualization system of pulmonary fissures depicted on CT images. The purpose is to provide clinicians with an intuitive perception of a patient's lung anatomy through an interactive examination of fissures, enhancing their understanding and accurate diagnosis of lung diseases. This system consists of four key components: (1) region-of-interest segmentation; (2) three-dimensional surface modeling; (3) fissure type classification; and (4) an interactive user interface, by which the extracted fissures are displayed flexibly in different space domains including image space, geometric space, and mixed space using simple toggling "on" and "off" operations. In this system, the different visualization modes allow users not only to examine the fissures themselves but also to analyze the relationship between fissures and their surrounding structures. In addition, the users can adjust thresholds interactively to visualize the fissure surface under different scanning and processing conditions. Such a visualization tool is expected to facilitate investigation of structures near the fissures and provide an efficient "visual aid" for other applications such as treatment planning and assessment of therapeutic efficacy as well as education of medical professionals.
Geoscience data visualization and analysis using GeoMapApp
NASA Astrophysics Data System (ADS)
Ferrini, Vicki; Carbotte, Suzanne; Ryan, William; Chan, Samantha
2013-04-01
Increased availability of geoscience data resources has resulted in new opportunities for developing visualization and analysis tools that not only promote data integration and synthesis, but also facilitate quantitative cross-disciplinary access to data. Interdisciplinary investigations, in particular, frequently require visualizations and quantitative access to specialized data resources across disciplines, which has historically required specialist knowledge of data formats and software tools. GeoMapApp (www.geomapapp.org) is a free online data visualization and analysis tool that provides direct quantitative access to a wide variety of geoscience data for a broad international interdisciplinary user community. While GeoMapApp provides access to online data resources, it can also be packaged to work offline through the deployment of a small portable hard drive. This mode of operation can be particularly useful during field programs to provide functionality and direct access to data when a network connection is not possible. Hundreds of data sets from a variety of repositories are directly accessible in GeoMapApp, without the need for the user to understand the specifics of file formats or data reduction procedures. Available data include global and regional gridded data, images, as well as tabular and vector datasets. In addition to basic visualization and data discovery functionality, users are provided with simple tools for creating customized maps and visualizations and to quantitatively interrogate data. Specialized data portals with advanced functionality are also provided for power users to further analyze data resources and access underlying component datasets. Users may import and analyze their own geospatial datasets by loading local versions of geospatial data and can access content made available through Web Feature Services (WFS) and Web Map Services (WMS). 
Once data are loaded in GeoMapApp, a variety of options are provided to export data and/or 2D/3D visualizations into common formats including grids, images, text files, spreadsheets, etc. Examples of interdisciplinary investigations that make use of GeoMapApp visualization and analysis functionality will be provided.
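The WMS access mentioned in the GeoMapApp record above amounts, at the protocol level, to an HTTP GetMap request. The sketch below only constructs such a request URL (no network call is made); the endpoint is a placeholder and the layer name is invented, but the parameter names follow the OGC WMS 1.1.1 specification.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap request URL.

    bbox is (minx, miny, maxx, maxy) in EPSG:4326 degrees.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and layer, for illustration only.
url = wms_getmap_url("https://example.org/wms", "topo", (-180, -90, 180, 90))
print(url)
```

A client such as GeoMapApp would issue requests of this shape and composite the returned PNG tiles with its other layers.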
Does ear endoscopy provide advantages in the outpatient management of open mastoidectomy cavities?
Freire, Gustavo Subtil Magalhães; Sampaio, Andre Luiz Lopes; Lopes, Rafaela Aquino Fernandes; Nakanishi, Márcio; de Oliveira, Carlos Augusto Costa Pires
2018-01-01
To evaluate the use of ear endoscopy in the postoperative management of open mastoidectomy cavities, and to test whether ear endoscopy improves inspection and cleaning compared with ear microscopy. Prospective study. Thirty-two ears were divided into two groups: group 1, examination and cleaning of mastoid cavities under endoscopic visualization after microscopic standard ear cleaning; group 2, examination and cleaning of mastoid cavities under microscopic visualization after endoscope-assisted ear cleaning. We assessed the ability of each method to provide exposure and facilitate cleaning, comparing the benefits of microscopy and endoscopy when used sequentially and vice-versa. Endoscopy provided additional benefits for exposure in 61.1% of cases and cleaning in 66.7%. Microscopy provided no additional benefits in terms of exposure in any case, and provided added benefit for cleaning in only 21.4% of cases. For outpatient postoperative care of open mastoidectomy cavities, ear endoscopy provides greater benefit over ear microscopy than vice-versa. In over half of all cases, endoscopy was able to expose areas not visualized under the microscope. Furthermore, in two-thirds of cases, endoscopy enabled removal of material that could not be cleared under microscopy. Ear endoscopy was superior to microscopy in terms of enabling exposure and cleaning of hard-to-reach sites, due to its wider field of vision. Ear endoscopy is a feasible technique for the postoperative management of open mastoidectomy cavities. Ear endoscopy provided superior advantages in terms of exposure and aural cleaning compared with microscopy.
Solar System Treks: Interactive Web Portals for STEM, Exploration and Beyond
NASA Astrophysics Data System (ADS)
Law, E.; Day, B. H.; Viotti, M.
2017-12-01
NASA's Solar System Treks project produces a suite of online visualization and analysis tools for lunar and planetary mapping and modeling. These portals offer great benefits for education and public outreach, providing access to data from a wide range of instruments aboard a variety of past and current missions. As a component of NASA's STEM Activation Infrastructure, they are available as resources for NASA STEM programs, and to the greater STEM community. As new missions are planned to a variety of planetary bodies, these tools facilitate public understanding of the missions and engage the public in the process of identifying and selecting where these missions will land. There are currently three web portals in the program: Moon Trek (https://moontrek.jpl.nasa.gov), Mars Trek (https://marstrek.jpl.nasa.gov), and Vesta Trek (https://vestatrek.jpl.nasa.gov). A new release of Mars Trek includes new tools and data products focusing on human landing site selection. Backed by evidence-based cognitive and computer science findings, an additional version is available for educational and public audiences in support of learning along novice-to-expert pathways, enabling authentic, real-world interaction with planetary data. Portals for additional planetary bodies are planned. As web-based toolsets, the portals do not require users to purchase or install any software beyond current web browsers. The portals provide analysis tools for measurement and study of planetary terrain. They allow data to be layered and adjusted to optimize visualization. Visualizations are easily stored and shared. The portals provide 3D visualization and give users the ability to mark terrain for generation of STL/OBJ files that can be directed to 3D printers. Such 3D prints are valuable tools in museums, public exhibits, and classrooms - especially for the visually impaired.
The program supports additional clients, web services, and APIs facilitating dissemination of planetary data to external applications and venues. NASA challenges and hackathons also provide members of the software development community opportunities to participate in tool development and leverage data from the portals.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed unrelated (i.e., tactile) tasks to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which seems to possibly prevail for longer durations. PMID:25954166
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, promoting their use as an educational resource for informal education settings and increasing public awareness, and serving as an aid for researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
Computerized visual feedback: an adjunct to robotic-assisted gait training.
Banz, Raphael; Bolliger, Marc; Colombo, Gery; Dietz, Volker; Lünenburger, Lars
2008-10-01
Robotic devices for walking rehabilitation allow new possibilities for providing performance-related information to patients during gait training. Based on motor learning principles, augmented feedback during robotic-assisted gait training might improve the rehabilitation process used to regain walking function. This report presents a method to provide visual feedback implemented in a driven gait orthosis (DGO). The purpose of the study was to compare the immediate effect on motor output in subjects during robotic-assisted gait training when they used computerized visual feedback and when they followed verbal instructions of a physical therapist. Twelve people with neurological gait disorders due to incomplete spinal cord injury participated. Subjects were instructed to walk within the DGO in 2 different conditions. They were asked to increase their motor output by following the instructions of a therapist and by observing visual feedback. In addition, the subjects' opinions about using visual feedback were investigated by a questionnaire. Computerized visual feedback and verbal instructions by the therapist were observed to result in a similar change in motor output in subjects when walking within the DGO. Subjects reported that they were more motivated and concentrated on their movements when using computerized visual feedback compared with when no form of feedback was provided. Computerized visual feedback is a valuable adjunct to robotic-assisted gait training. It represents a relevant tool to increase patients' motor output, involvement, and motivation during gait training, similar to verbal instructions by a therapist.
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
Glyph-based generic network visualization
NASA Astrophysics Data System (ADS)
Erbacher, Robert F.
2002-03-01
Network managers and system administrators have an enormous task set before them in this day of growing network usage. This is particularly true of e-commerce companies and others dependent on a computer network for their livelihood. Network managers and system administrators must monitor activity for intrusions and misuse while at the same time monitoring the performance of the network. In this paper, we describe our visualization techniques for assisting in the monitoring of networks for both of these tasks. The goal of these visualization techniques is to integrate the visual representation of both network performance/usage and data relevant to intrusion detection. The main difficulties arise from the difference in the intrinsic data and layout needs of each of these tasks. Glyph-based techniques are additionally used to indicate the representative values of the necessary data parameters over time. Finally, our techniques are geared towards providing an environment that can be used continuously for constant real-time monitoring of the network environment.
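The glyph idea described in the abstract above can be illustrated with a toy mapping: each monitored parameter of a network node drives one visual attribute of its glyph, so several values can be read off a single mark. This is only a sketch of the general technique, not the paper's actual encoding; the parameter names and attribute ranges are invented.

```python
def make_glyph(traffic, error_rate, connections):
    """Map three monitored parameters of a network node to glyph attributes.

    traffic     : normalized load in [0, 1]      -> glyph size
    error_rate  : fraction of failed packets     -> red/green color balance
    connections : number of active peers         -> one spoke per peer
    """
    return {
        "radius": 5 + 20 * min(traffic, 1.0),           # busier node, bigger glyph
        "color": (round(255 * error_rate),              # more errors, redder
                  round(255 * (1 - error_rate)), 0),
        "spokes": connections,
    }

glyph = make_glyph(traffic=0.5, error_rate=0.2, connections=8)
print(glyph)
```

A renderer would then draw one such glyph per host, letting an administrator scan for large red glyphs (busy, failing nodes) at a glance.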
Wiese, Holger; Schweinberger, Stefan R
2015-01-01
The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both co-occurrence/no shared semantics and no co-occurrence/shared semantics conditions, than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Christopher J; Ahrens, James P; Wang, Jun
2010-10-15
Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network are often a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
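The co-location scheduling described in the VisIO abstract above can be sketched in miniature: given a block-to-replica map of the kind a distributed file system such as HDFS reports, assign each reader process to a node that already holds its block, falling back to the least-loaded node otherwise. All names here are illustrative; this is a generic greedy locality scheduler, not the actual VisIO algorithm or API.

```python
def schedule_readers(block_locations, nodes):
    """Greedy locality-aware placement of reader processes.

    block_locations: {block_id: set of node names holding a replica}
    nodes: all available node names
    Returns {block_id: node assigned to read that block}.
    """
    load = {n: 0 for n in nodes}
    assignment = {}
    for block, replicas in sorted(block_locations.items()):
        local = [n for n in replicas if n in load]
        # Prefer the least-loaded node with a local replica; if no replica
        # is reachable, fall back to the least-loaded node overall.
        candidates = local if local else nodes
        chosen = min(candidates, key=lambda n: load[n])
        assignment[block] = chosen
        load[chosen] += 1
    return assignment

placement = {"b0": {"n1"}, "b1": {"n1", "n2"}, "b2": {"n3"}}
print(schedule_readers(placement, ["n1", "n2", "n3"]))
```

Reading a block on the node that stores it turns a network transfer into a local disk read, which is the source of the bandwidth scaling the abstract reports.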
BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju
2009-01-01
We demonstrate a novel Google Earth-based visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.
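Serving detections to Google Earth, as the BioMon record above describes, typically means emitting KML. The following minimal sketch renders each detected change as a KML Placemark; element names follow the KML 2.2 schema, but the function name and the sample coordinates are invented for illustration.

```python
def changes_to_kml(changes):
    """Render change detections as a KML document string.

    changes: list of (name, lon, lat) tuples.
    """
    placemarks = "".join(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lon, lat in changes
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f"<Document>{placemarks}</Document></kml>")

# A single hypothetical biomass-change detection.
doc = changes_to_kml([("biomass-drop", -84.31, 35.93)])
print(doc)
```

A monitoring back end would regenerate such a document as new MODIS scenes are processed and expose it as a network link that Google Earth refreshes periodically.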
ViSBARD: Visual System for Browsing, Analysis and Retrieval of Data
NASA Astrophysics Data System (ADS)
Roberts, D. Aaron; Boller, Ryan; Rezapkin, V.; Coleman, J.; McGuire, R.; Goldstein, M.; Kalb, V.; Kulkarni, R.; Luckyanova, M.; Byrnes, J.; Kerbel, U.; Candey, R.; Holmes, C.; Chimiak, R.; Harris, B.
2018-04-01
ViSBARD interactively visualizes and analyzes space physics data. It provides an interactive integrated 3-D and 2-D environment to determine correlations between measurements across many spacecraft. It supports a variety of spacecraft data products and MHD models and is easily extensible to others. ViSBARD provides a way of visualizing multiple vector and scalar quantities as measured by many spacecraft at once. The data are displayed three-dimensionally along the orbits, which may be rendered either as connected lines or as points. The data display allows the rapid determination of vector configurations, correlations between many measurements at multiple points, and global relationships. With the addition of magnetohydrodynamic (MHD) model data, this environment can also be used to validate simulation results with observed data, use simulated data to provide a global context for sparse observed data, and apply feature detection techniques to the simulated data.
Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?
Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.
2015-01-01
Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases (pictures relative to visual words, and visual words relative to auditory words alone), the benefit of pictures and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in that condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
Multiplexing in the primate motion pathway.
Huk, Alexander C
2012-06-01
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing, and of the computations required for demultiplexing, may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.
Visual probes and methods for placing visual probes into subsurface areas
Clark, Don T.; Erickson, Eugene E.; Casper, William L.; Everett, David M.
2004-11-23
Visual probes and methods for placing visual probes into subsurface areas in either contaminated or non-contaminated sites are described. In one implementation, the method includes driving at least a portion of a visual probe into the ground using direct push, sonic drilling, or a combination of direct push and sonic drilling. Such is accomplished without providing an open pathway for contaminants or fugitive gases to reach the surface. According to one implementation, the invention includes an entry segment configured for insertion into the ground or through difficult materials (e.g., concrete, steel, asphalt, metals, or items associated with waste), at least one extension segment configured to selectively couple with the entry segment, at least one push rod, and a pressure cap. Additional implementations are contemplated.
Data Visualization and Storytelling: Students Showcasing Innovative Work on the NASA Hyperwall
NASA Astrophysics Data System (ADS)
Hankin, E. R.; Hasan, M.; Williams, B. M.; Harwell, D. E.
2017-12-01
Visual storytelling can be used to quickly and effectively tell a story about data and scientific research, with powerful visuals driving a deeper level of engagement. In 2016, the American Geophysical Union (AGU) launched a pilot contest with a grant from NASA to fund students to travel to the AGU Fall Meeting to present innovative data visualizations with fascinating stories on the NASA Hyperwall. This presentation will discuss the purpose of the contest and provide highlights. Additionally, the presentation will feature Mejs Hasan, one of the 2016 contest grand prize winners, who will discuss her award-winning research utilizing Landsat visual data, MODIS Enhanced Vegetation Index data, and NOAA nightlight data to study the effects of both drought and war on the Middle East.
Filtrates and Residues: Spectrophotometry: Mechanics and Measurement.
ERIC Educational Resources Information Center
Diehl-Jones, Susan M.
1984-01-01
Provided are experiments to acquaint students with basic spectrophotometer components and their functions, to use the instrument in an open-ended experiment, and to use Beer's Law in several different ways. In addition, the detectability (tolerance) of the spectrophotometer with visual detection limits is provided as an optional activity. (JN)
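The Beer's Law relationship the exercises rely on is A = εlc: absorbance equals molar absorptivity times path length times concentration. A short worked example (the numeric values are illustrative, not from the article):

```python
# Beer's Law: A = epsilon * l * c, with epsilon the molar absorptivity
# (L mol^-1 cm^-1), l the path length (cm), and c the concentration (mol/L).
# The example values below are illustrative assumptions.
def absorbance(epsilon, path_cm, conc_molar):
    """Absorbance of a solution per Beer's Law (dimensionless)."""
    return epsilon * path_cm * conc_molar

# e.g. epsilon = 8400 L/(mol*cm), a 1 cm cuvette, and a 5e-5 M solution
A = absorbance(8400, 1.0, 5e-5)  # 8400 * 1.0 * 5e-5 = 0.42
```

Inverting the same relation (c = A / (εl)) is the usual classroom route from a measured absorbance back to an unknown concentration.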
The development of organized visual search
Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.
2013-01-01
Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560
Rohrschneider, K; Mackensen, I
2013-04-01
Since 1868, the Department of Ophthalmology at the University of Heidelberg has been providing care for the pupils of the school for blind and visually handicapped children in Ilvesheim, Germany. Previous studies on the causes of low vision have demonstrated the effects of the advances in medicine and ophthalmology, with a marked decrease in the number of inflammatory corneal diseases, followed by a reduced number of students suffering from congenital cataract and glaucoma. The aim of the present study was to evaluate current data and to compare it to previous data. Ophthalmological data and additional disorders could be evaluated in 268 students attending the special education school Schloßschule Ilvesheim between 2000 and 2008. The findings were compared to the results of previous studies concerning the degree of visual impairment and diagnosis. The children were divided according to German social law into blind, severely visually handicapped and visually handicapped. Of the 268 students, 83 (31.0%) were premature infants, 69 of whom had additional disabilities; 130 were blind and 51 severely visually handicapped. Of the students, 142 had additional learning, mental and/or motor handicaps. The most frequent cause of blindness or severe visual impairment was optic nerve atrophy (36.2% and 37.3%, respectively). The frequency of hereditary retinal diseases among the blind children was, at 24.6%, slightly higher than in the data analysis from 1981, and was 15.7% and 17.1% among the severely visually handicapped and visually handicapped, respectively. Retinopathy of prematurity was diagnosed in approximately 20% of blind and severely visually handicapped children. As a result of the enormous advances in medical capabilities during the last decades, the number of (formerly) premature infants has markedly increased. Most of these students are multiply handicapped and need extensive assistance. While the number of students suffering from hereditary retinal diseases increased only minimally during the last 40 years, the number of blind students without additional disabilities has decreased due to the improved technical means to integrate even blind students into mainstream schools.
NASA Astrophysics Data System (ADS)
Sorce, Salvatore; Malizia, Alessio; Jiang, Pingfei; Atherton, Mark; Harrison, David
2018-04-01
One of the main time- and money-consuming tasks in the design of industrial devices and parts is checking for possible patent infringements. Indeed, the great number of documents to be mined and the wide variety of technical language used to describe inventions are reasons why considerable amounts of time may be needed. On the other hand, the early detection of a possible patent conflict, in addition to reducing the risk of legal disputes, could stimulate a designer's creativity to overcome similarities with overlapping patents. For this reason, many patent analysis systems exist, each with its own features and access modes. We have designed a visual interface providing intuitive access to such systems, freeing designers from needing specific knowledge of querying languages and providing them with visual clues. We tested the interface on a framework aimed at representing mechanical engineering patents; the framework is based on a semantic database and provides patent conflict analysis for early-stage designs. The interface supports visual query composition to obtain a list of potentially overlapping designs.
Secure videoconferencing equipment switching system and method
Hansen, Michael E [Livermore, CA
2009-01-13
A switching system and method are provided to facilitate use of videoconference facilities over a plurality of security levels. The system includes a switch coupled to a plurality of codecs and communication networks. Audio/Visual peripheral components are connected to the switch. The switch couples control and data signals between the Audio/Visual peripheral components and one, but not both, of the plurality of codecs. The switch additionally couples communication networks of the appropriate security level to each of the codecs. In this manner, a videoconferencing facility is provided for use on both secure and non-secure networks.
Health figures: an open source JavaScript library for health data visualization.
Ledesma, Andres; Al-Musawi, Mohammed; Nieminen, Hannu
2016-03-22
The way we look at data has a great impact on how we can understand it, particularly when the data is related to health and wellness. Due to the increased use of self-tracking devices and the ongoing shift towards preventive medicine, better understanding of our health data is an important part of improving the general welfare of the citizens. Electronic Health Records, self-tracking devices and mobile applications provide a rich variety of data, but it often becomes difficult to understand. We implemented the hFigures library, inspired by the hGraph visualization with additional improvements. The purpose of the library is to provide a visual representation of the evolution of health measurements in a complete and useful manner. We researched the usefulness and usability of the library by building an application for health data visualization in a health coaching program. We performed a user evaluation with Heuristic Evaluation, Controlled User Testing and Usability Questionnaires. In the Heuristic Evaluation the average response was 6.3 out of 7 points, and the Cognitive Walkthrough done by usability experts indicated no design or mismatch errors. In the CSUQ usability test the system obtained an average score of 6.13 out of 7, and in the ASQ usability test the overall satisfaction score was 6.64 out of 7. We developed hFigures, an open source library for visualizing a complete, accurate and normalized graphical representation of health data. The idea is based on the concept of the hGraph but provides additional key features, including a comparison of multiple health measurements over time. We conducted a usability evaluation of the library as a key component of an application for health and wellness monitoring. The results indicate that the data visualization library was helpful in assisting users in understanding health data and its evolution over time.
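The normalization the abstract mentions can be illustrated with a minimal sketch. This is a hypothetical rendering of the hGraph-style idea, not the hFigures API: each raw measurement is rescaled against its healthy reference range, so that values in [0, 1] lie inside the healthy range and values outside it signal a measurement needing attention.

```python
# Hypothetical sketch of hGraph-style normalization (function name and the
# example range are assumptions, not the actual hFigures API): map a raw
# measurement onto its healthy reference range, so 0.5 is the centre of
# the healthy range and values outside [0, 1] fall outside it.
def normalize(value, low, high):
    """Scale a measurement relative to the healthy range [low, high]."""
    return (value - low) / (high - low)

# e.g. total cholesterol of 220 mg/dL against an assumed healthy range
# of 125-200 mg/dL comes out above 1, i.e. outside the healthy band
score = normalize(220, 125, 200)
```

Plotting every measurement on this common scale is what lets heterogeneous health data (cholesterol, blood pressure, BMI, ...) share one radial figure and be compared over time.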
Helping Educators Find Visualizations and Teaching Materials Just-in-Time
NASA Astrophysics Data System (ADS)
McDaris, J.; Manduca, C. A.; MacDonald, R. H.
2005-12-01
Major events and natural disasters like hurricanes and tsunamis provide geoscience educators with powerful teachable moments to engage their students with class content. In order to take advantage of these opportunities, educators need quality topical resources related to current earth science events. The web has become an excellent vehicle for disseminating this type of resource. In response to the 2004 Indian Ocean Earthquake and to Hurricane Katrina's devastating impact on the US Gulf Coast, the On the Cutting Edge professional development program developed collections of visualizations for use in teaching (serc.carleton.edu/NAGTWorkshops/visualization/collections/tsunami.html, serc.carleton.edu/NAGTWorkshops/visualization/collections/hurricanes.html). These sites are collections of links to visualizations and other materials that can support the efforts of faculty, teachers, and those engaged in public outreach. They bring together resources created by researchers, government agencies and respected media sources and organize them for easy use by educators. Links are selected to provide a variety of different types of visualizations (e.g., photographic images, animations, satellite imagery) and to assist educators in teaching about the geologic event reported in the news, associated Earth science concepts, and related topics of high interest. The cited links are selected from quality sources and are reviewed by SERC staff before being included on the page. Geoscience educators are encouraged to recommend links and supporting materials and to comment on the available resources. In this way the collection becomes more complete and its quality is enhanced. These sites have received substantial use (Tsunami: 77,000 visitors in the first 3 months; Hurricanes: 2,500 visitors in the first week), indicating that in addition to use by educators, they are being used by the general public seeking information about the events. Thus they provide an effective mechanism for guiding the public to quality resources created by geoscience researchers and facilities, in addition to supporting the incorporation of geoscience research in education.
The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.
Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo
2014-12-15
Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques, by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools, in parallel with clinical information, will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.
Real-time new satellite product demonstration from microwave sensors and GOES-16 at NRL TC web
NASA Astrophysics Data System (ADS)
Cossuth, J.; Richardson, K.; Surratt, M. L.; Bankert, R.
2017-12-01
The Naval Research Laboratory (NRL) Tropical Cyclone (TC) satellite webpage (https://www.nrlmry.navy.mil/TC.html) provides demonstration analyses of storm imagery to benefit operational TC forecast centers around the world. With the availability of new spectral information provided by GOES-16 satellite data and recent research into improved visualization methods for microwave data, experimental imagery was operationally tested to visualize the structural changes of TCs during the 2017 hurricane season. This presentation provides an introduction to these innovative satellite analysis methods and NRL's next-generation satellite analysis system (the Geolocated Information Processing System, GeoIPS™), and demonstrates the added value of additional spectral frequencies when monitoring storms in near-real-time.
Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F
2007-01-01
Background: Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results: We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion: MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818
VAAPA: a web platform for visualization and analysis of alternative polyadenylation.
Guan, Jinting; Fu, Jingyi; Wu, Mingcheng; Chen, Longteng; Ji, Guoli; Quinn Li, Qingshun; Wu, Xiaohui
2015-02-01
Polyadenylation [poly(A)] is an essential process during the maturation of most mRNAs in eukaryotes. Alternative polyadenylation (APA) as an important layer of gene expression regulation has been increasingly recognized in various species. Here, a web platform for visualization and analysis of alternative polyadenylation (VAAPA) was developed. This platform can visualize the distribution of poly(A) sites and poly(A) clusters of a gene or a section of a chromosome. It can also highlight genes with switched APA sites among different conditions. VAAPA is an easy-to-use web-based tool that provides functions for poly(A) site query, data uploading, downloading, and APA site visualization. It was designed in a multi-tier architecture and developed based on Smart GWT (Google Web Toolkit) using Java as the development language. VAAPA will be a valuable addition to the community for the comprehensive study of APA, not only by making high-quality poly(A) site data more accessible, but also by providing users with numerous valuable functions for poly(A) site analysis and visualization.
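One building block behind the "poly(A) clusters" the platform visualizes can be sketched as follows. This is a generic illustration of site clustering, not VAAPA's actual algorithm or API: sorted poly(A) site coordinates on a chromosome are grouped into clusters whenever consecutive sites lie within a fixed distance of each other (the 24 nt gap below is an assumed threshold).

```python
# Hypothetical sketch of grouping poly(A) site coordinates into clusters
# (not VAAPA's actual implementation): consecutive sites closer than
# max_gap nucleotides join the same cluster.
def cluster_sites(positions, max_gap=24):
    """positions: iterable of 1-based site coordinates on one chromosome.
    Returns a list of clusters, each a sorted list of coordinates."""
    clusters, current = [], []
    for pos in sorted(positions):
        if current and pos - current[-1] > max_gap:
            clusters.append(current)   # gap too large: close the cluster
            current = []
        current.append(pos)
    if current:
        clusters.append(current)
    return clusters

clusters = cluster_sites([100, 110, 300, 305, 900])
```

Counting how clusters shift between conditions is then the basis for flagging genes with "switched" APA sites.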
A generalized 3D framework for visualization of planetary data.
NASA Astrophysics Data System (ADS)
Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.
2016-12-01
As the volume and variety of data returned from planetary exploration missions continues to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a JavaScript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near-real-time 'QuickLook', interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.
NASA's Lunar and Planetary Mapping and Modeling Program
NASA Astrophysics Data System (ADS)
Law, E.; Day, B. H.; Kim, R. M.; Bui, B.; Malhotra, S.; Chang, G.; Sadaqathullah, S.; Arevalo, E.; Vu, Q. A.
2016-12-01
NASA's Lunar and Planetary Mapping and Modeling Program produces a suite of online visualization and analysis tools. Originally designed for mission planning and science, these portals offer great benefits for education and public outreach (EPO), providing access to data from a wide range of instruments aboard a variety of past and current missions. As a component of NASA's Science EPO Infrastructure, they are available as resources for NASA STEM EPO programs, and to the greater EPO community. As new missions are planned to a variety of planetary bodies, these tools are facilitating the public's understanding of the missions and engaging the public in the process of identifying and selecting where these missions will land. There are currently three web portals in the program: the Lunar Mapping and Modeling Portal or LMMP (http://lmmp.nasa.gov), Vesta Trek (http://vestatrek.jpl.nasa.gov), and Mars Trek (http://marstrek.jpl.nasa.gov). Portals for additional planetary bodies are planned. As web-based toolsets, the portals do not require users to purchase or install any software beyond current web browsers. The portals provide analysis tools for measurement and study of planetary terrain. They allow data to be layered and adjusted to optimize visualization. Visualizations are easily stored and shared. The portals provide 3D visualization and give users the ability to mark terrain for generation of STL files that can be directed to 3D printers. Such 3D prints are valuable tools in museums, public exhibits, and classrooms - especially for the visually impaired. Along with the web portals, the program supports additional clients, web services, and APIs that facilitate dissemination of planetary data to a range of external applications and venues. NASA challenges and hackathons are also providing members of the software development community opportunities to participate in tool development and leverage data from the portals.
Retinal and visual system: occupational and environmental toxicology.
Fox, Donald A
2015-01-01
Occupational chemical exposure often results in sensory systems alterations that occur without other clinical signs or symptoms. Approximately 3000 chemicals are toxic to the retina and central visual system. Their dysfunction can have immediate, long-term, and delayed effects on mental health, physical health, and performance and lead to increased occupational injuries. The aims of this chapter are fourfold. First, provide references on retinal/visual system structure, function, and assessment techniques. Second, discuss the retinal features that make it especially vulnerable to toxic chemicals. Third, review the clinical and corresponding experimental data regarding retinal/visual system deficits produced by occupational toxicants: organic solvents (carbon disulfide, trichloroethylene, tetrachloroethylene, styrene, toluene, and mixtures) and metals (inorganic lead, methyl mercury, and mercury vapor). Fourth, discuss occupational and environmental toxicants as risk factors for late-onset retinal diseases and degeneration. Overall, the toxicants altered color vision, rod- and/or cone-mediated electroretinograms, visual fields, spatial contrast sensitivity, and/or retinal thickness. The findings elucidate the importance of conducting multimodal noninvasive clinical, electrophysiologic, imaging and vision testing to monitor toxicant-exposed workers for possible retinal/visual system alterations. Finally, since the retina is a window into the brain, an increased awareness and understanding of retinal/visual system dysfunction should provide additional insight into acquired neurodegenerative disorders.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose: To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics, and to provide a detailed discussion of the process used to design, manufacture, and evaluate wfgSCLs. Methods: Fourteen eyes of 7 subjects with keratoconus were enrolled, and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results: The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL did not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS, and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high-contrast acuity did not reach normal, age-matched levels, in agreement with prior results on the acute application of wavefront correction to keratoconic eyes. Conclusions: Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of the higher-order aberration concomitant with the disease and can provide visual image quality equivalent to that seen in normal eyes. Longer-duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
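For readers unfamiliar with the RMS figures quoted above: lower- and higher-order RMS wavefront error are root-sum-squares of Zernike coefficients grouped by radial order. A minimal sketch (the function name, the (n, m)-keyed dictionary layout, and the coefficient values are our own illustration, not the authors' data):

```python
import math

def wavefront_rms(zernike):
    """Split Zernike coefficients (keyed by (n, m) radial/azimuthal
    order, values in microns) into lower-order (n <= 2) and
    higher-order (n >= 3) RMS wavefront error."""
    lo = [c for (n, _), c in zernike.items() if n <= 2]
    hi = [c for (n, _), c in zernike.items() if n >= 3]
    rss = lambda cs: math.sqrt(sum(c * c for c in cs))  # root sum of squares
    return rss(lo), rss(hi)

# hypothetical coefficients: defocus, astigmatism, coma, spherical
coeffs = {(2, 0): 0.5, (2, -2): 0.1, (3, -1): 0.2, (4, 0): 0.05}
lo_rms, ho_rms = wavefront_rms(coeffs)
print(round(lo_rms, 3), round(ho_rms, 3))  # 0.51 0.206
```

A successful wfgSCL fit, in these terms, drives both numbers toward the normal range.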
A Novel Marking Reader for Progressive Addition Lenses Based on Gabor Holography.
Perucho, Beatriz; Picazo-Bueno, José Angel; Micó, Vicente
2016-05-01
Progressive addition lenses (PALs) are marked with permanent engraved marks (PEMs) at standardized locations. Permanent engraved marks are very useful throughout the manufacturing and mounting processes, act as locator marks for re-inking the removable marks, and contain useful information about the PAL. However, PEMs are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted lenses or on lenses with antireflection or scratch-resistant coatings. The aim of this article is to present a new generation of portable marking reader based on an extremely simplified concept for visualization and identification of PEMs in PALs. Permanent engraved marks on different PALs are visualized using classical Gabor holography as the underlying principle. Gabor holography allows phase-sample visualization with adjustable magnification and can be implemented in either classical or digital versions. Here, visual Gabor holography is used to project a magnified defocused image of the PEMs onto a translucent visualization screen, where the PEM is clearly identified. Different types of PALs (conventional, personalized, old and scratched, sunglasses, etc.) have been tested to visualize PEMs with the proposed marking reader. The PEMs are visible in every case, and a variable magnification factor can be achieved simply by moving the PAL up and down in the instrument. In addition, a second illumination wavelength is also tested, showing the applicability of this novel marking reader for different illuminations. A new concept of marking-reader ophthalmic instrument has been presented and validated in the laboratory. The configuration involves only a commercial-grade laser diode and a visualization screen for PEM identification. The instrument is portable, economical, and easy to use, and it can be used for identifying a patient's current PAL model and for re-inking removable marks or finding test points regardless of the age of the PAL, its scratches, tints, or coatings.
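The "variable magnification by moving the PAL" follows from textbook point-source projection geometry, which also governs an in-line (Gabor) setup: with the lens at distance z1 from the laser diode and the screen z2 beyond the lens, the geometric magnification grows as the lens approaches the source. Variable names and distances below are our own illustration, not the article's:

```python
def gabor_magnification(z1, z2):
    """Geometric magnification of a point-source projection:
    object (the PAL) at z1 from the source, screen z2 beyond the
    object. Smaller z1 (lens closer to the source) -> larger image,
    which is the 'zoom by moving the PAL' effect."""
    return (z1 + z2) / z1

# lens 50 mm from the diode, screen 150 mm behind the lens
print(gabor_magnification(50.0, 150.0))  # 4.0
```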
Kuo, Fang-Chuan; Wang, Nai-Hwei; Hong, Chang-Zern
2010-11-01
A cross-sectional study of balance control in adolescents with idiopathic scoliosis (AIS). To investigate the impact of visual and somatosensory deprivation on dynamic balance in AIS patients and to discuss electromyographic (EMG) and posture-sway findings. Most studies focus on postural sway during quiet standing, with little attention to muscle activation patterns during dynamic standing. Twenty-two AIS patients and 22 age-matched normal subjects were studied. To understand how visual and somatosensory information modulate standing balance, balance tests with the Biodex stability system were performed on a moving platform under 3 conditions: visual feedback provided (VF), eyes closed (EC), and standing on a sponge pad with visual feedback provided (SV). Muscular activities of the bilateral lumbar multifidi, gluteus medii, and gastrocnemii muscles were recorded with a telemetry EMG system. AIS patients had a normal balance index and EMG amplitude and duration similar to those of normal subjects in the balance test. However, the onset latency of the right gastrocnemius was earlier in AIS patients than in normal subjects. In addition, body-side asymmetry was noted in muscle strength and onset latency in AIS subjects. Under the EC condition, lumbar multifidi and gluteus medii activities were higher than under the SV and VF conditions (P < 0.05). Under the SV condition, the medial-lateral tilting angle was smaller than under the VF and EC conditions. In addition, the active duration of the right gluteus medius was shorter under the SV condition (P < 0.05). Dynamic balance control is particularly disrupted under visual deprivation, with increased lumbar multifidi and gluteus medii activities as compensation. A sponge pad can decrease frontal-plane tilting and gluteus medii effort. The asymmetric muscle strength and onset timing are attributed to anatomic deformation rather than to neurologic etiologic factors.
Optogenetic Assessment of Horizontal Interactions in Primary Visual Cortex
Huang, Xiaoying; Elyada, Yishai M.; Bosking, William H.; Walker, Theo
2014-01-01
Columnar organization of orientation selectivity and clustered horizontal connections linking orientation columns are two of the distinctive organizational features of primary visual cortex in many mammalian species. However, the functional role of these connections has been harder to characterize. Here we examine the extent and nature of horizontal interactions in V1 of the tree shrew using optical imaging of intrinsic signals, optogenetic stimulation, and multi-unit recording. Surprisingly, we find that the effects of optogenetic stimulation depend primarily on distance and not on the specific orientation domains or axes in the cortex that are stimulated. In addition, across a wide range of variation in both visual and optogenetic stimulation we find linear addition of the two inputs. These results emphasize that the cortex provides a rich substrate for functional interactions that are not limited to the orientation-specific interactions predicted by the monosynaptic distribution of horizontal connections. PMID:24695715
Wang, Xiaoying; Peelen, Marius V; Han, Zaizhu; He, Chenxi; Caramazza, Alfonso; Bi, Yanchao
2015-09-09
Classical animal visual deprivation studies and human neuroimaging studies have shown that visual experience plays a critical role in shaping the functionality and connectivity of the visual cortex. Interestingly, recent studies have additionally reported circumscribed regions in the visual cortex in which functional selectivity was remarkably similar in individuals with and without visual experience. Here, by directly comparing resting-state and task-based fMRI data in congenitally blind and sighted human subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. We found a close agreement between connectional and functional maps, pointing to a strong interdependence of connectivity and function. Visual experience (or the absence thereof) had a pronounced effect on the resting-state connectivity and functional response profile of occipital cortex and the posterior lateral fusiform gyrus. By contrast, connectional and functional fingerprints in the anterior medial and posterior lateral parts of the ventral visual cortex were statistically indistinguishable between blind and sighted individuals. These results provide a large-scale mapping of the influence of visual experience on the development of both functional and connectivity properties of visual cortex, which serves as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions. Significance statement: How is the functionality and connectivity of the visual cortex shaped by visual experience? By directly comparing resting-state and task-based fMRI data in congenitally blind and sighted subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. 
In addition to revealing regions that are strongly dependent on visual experience (early visual cortex and posterior fusiform gyrus), our results showed regions in which connectional and functional patterns are highly similar in blind and sighted individuals (anterior medial and posterior lateral ventral occipital temporal cortex). These results serve as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions of the visual cortex. Copyright © 2015 the authors 0270-6474/15/3512545-15$15.00/0.
Mental Imagery and Visual Working Memory
Keogh, Rebecca; Pearson, Joel
2011-01-01
Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024
High-level user interfaces for transfer function design with semantics.
Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter
2006-01-01
Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.
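The abstract's PCA-based abstraction can be pictured as follows: expert-designed transfer functions become example vectors, and the principal components become the "semantic" axes exposed to non-expert users. A toy sketch, not the paper's implementation; simple opacity ramps stand in for real transfer functions, and all names are our own:

```python
import numpy as np

def pca_model(examples, k=1):
    """Fit a k-component PCA model to example transfer functions
    (each row: an opacity curve sampled over the scalar range)."""
    X = np.asarray(examples, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered examples yields the principal directions
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(mean, components, weights):
    """One 'semantic slider' per component: the weights pick a
    transfer function along the learned directions."""
    return mean + np.asarray(weights) @ components

# three hand-made opacity ramps as the expert-designed examples
ramps = [np.linspace(0, a, 8) for a in (0.2, 0.5, 1.0)]
mean, comps = pca_model(ramps, k=1)
tf = reconstruct(mean, comps, [0.0])  # weight 0 -> the mean curve
print(np.allclose(tf, np.mean(ramps, axis=0)))  # True
```

A non-expert user then manipulates one slider (the component weight) instead of the full set of optical parameters.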
Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3
NASA Astrophysics Data System (ADS)
Endsley, K. A.; Billmire, M. G.
2016-01-01
Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool - the Carbon Data Explorer - that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.
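The kind of reduction such a gridded-data viewer performs can be sketched in a few lines: collapse a (time, lat, lon) cube into a domain-mean time series for a line chart and a time-mean field for a map view. Function and variable names are illustrative, not the Carbon Data Explorer API:

```python
import numpy as np

def summarize_grid(cube):
    """Reduce a (time, lat, lon) data cube the way a gridded-data
    viewer might: a domain-mean time series for a line chart and a
    time-mean field for a map view. NaNs mark missing cells."""
    cube = np.asarray(cube, dtype=float)
    series = np.nanmean(cube, axis=(1, 2))  # one value per time step
    field = np.nanmean(cube, axis=0)        # one value per grid cell
    return series, field

# 4 time steps on a 2x3 grid of synthetic flux values
cube = np.arange(24.0).reshape(4, 2, 3)
series, field = summarize_grid(cube)
print(series.tolist())  # [2.5, 8.5, 14.5, 20.5]
```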
STRING 3: An Advanced Groundwater Flow Visualization Tool
NASA Astrophysics Data System (ADS)
Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph
2016-04-01
The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow an advanced method for intelligent, time-dependent seeding is used building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. 
For this, the silhouette based on the angle between neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach poses many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D Visualization of Groundwater Flow. In Proceedings of IAMG 2015, Freiberg, pp. 813-822.
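The pathlet idea underlying STRING's Lagrangian view reduces, in its simplest form, to advecting seed points through the velocity field while keeping their recent trail. A deliberately minimal sketch (explicit Euler in a steady 2D field; STRING itself uses FPM-based time-dependent seeding and far more machinery):

```python
import numpy as np

def advect_pathlets(seeds, velocity, dt=0.1, steps=5):
    """Advect seed points through a steady velocity field with
    explicit Euler steps, recording the trail each point leaves.
    `velocity(p)` returns the flow vectors at positions p (n, 2)."""
    p = np.asarray(seeds, dtype=float)
    trail = [p.copy()]
    for _ in range(steps):
        p = p + dt * velocity(p)  # Euler step along the flow
        trail.append(p.copy())
    return np.stack(trail)        # shape (steps + 1, n, 2)

# uniform flow to the right: every pathlet drifts in +x
uniform = lambda p: np.tile([1.0, 0.0], (len(p), 1))
trail = advect_pathlets([[0.0, 0.0], [0.0, 1.0]], uniform)
print(trail.shape, np.allclose(trail[-1][0], [0.5, 0.0]))  # (6, 2, 2) True
```

Rendering then draws each pathlet's trail with a fading tail, which is where the geometry-versus-point-sprite trade-off discussed above comes in.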
Hayashi, Ken; Manabe, Shin-Ichi; Hayashi, Hideyuki
2009-12-01
To compare visual acuity from far to near, contrast visual acuity, and acuity in the presence of glare (glare visual acuity) between an aspheric diffractive multifocal intraocular lens (IOL) with a low addition (add) power (+3.0 diopters) and a monofocal IOL. Hayashi Eye Hospital, Fukuoka, Japan. This prospective study comprised patients having implantation of an aspheric diffractive multifocal ReSTOR SN6AD1 IOL with a +3.0 D add (multifocal group) or a monofocal AcrySof IQ SN60WF IOL (monofocal group). Visual acuity from far to near distances, contrast acuity, and glare acuity were evaluated 3 months postoperatively. Each IOL group comprised 64 eyes of 32 patients. For monocular and binocular visual acuity, the mean uncorrected and distance-corrected intermediate acuity at 0.5 m and near acuity at 0.3 m were significantly better in the multifocal group than in the monofocal group (P=.0035); distance acuity and intermediate acuity at 0.7 m and 1.0 m were similar between the 2 groups. No significant differences were observed between groups in contrast acuity and glare acuity under photopic and mesopic conditions. Furthermore, no significant correlation was found between all-distance acuity and pupil diameter or between visual acuity and IOL decentration and tilt. The diffractive multifocal IOL with a low add power provided significantly better intermediate and near visual acuity than the monofocal IOL. Contrast sensitivity with and without glare was comparable between the 2 IOLs, and all-distance visual acuity was independent of pupil diameter and IOL displacement.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
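The calendar view described above rests on a simple aggregation: bucket readings by (weekday, hour) and average each cell. A sketch under assumed names, not the authors' code:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def calendar_view(readings, start):
    """Bucket hourly sensor readings into (weekday, hour) cells and
    average each cell - the aggregation behind a calendar-style
    air-quality view that exposes weekly/diurnal patterns."""
    cells = defaultdict(list)
    for i, value in enumerate(readings):
        t = start + timedelta(hours=i)
        cells[(t.weekday(), t.hour)].append(value)
    return {k: sum(v) / len(v) for k, v in cells.items()}

# 48 hourly PM2.5-like values starting on a Monday at midnight
avg = calendar_view(list(range(48)), datetime(2016, 1, 4))
print(avg[(0, 0)], avg[(1, 0)])  # 0.0 24.0
```

The self-adaptive controller mentioned in the abstract would, in these terms, re-bin the same readings at coarser or finer granularity as the data size changes.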
Varma, Gopal; Clough, Rachel E; Acher, Peter; Sénégas, Julien; Dahnke, Hannes; Keevil, Stephen F; Schaeffter, Tobias
2011-05-01
In magnetic resonance imaging, implantable devices are usually visualized with a negative contrast. Recently, positive contrast techniques have been proposed, such as susceptibility gradient mapping (SGM). However, SGM reduces the spatial resolution making positive visualization of small structures difficult. Here, a development of SGM using the original resolution (SUMO) is presented. For this, a filter is applied in k-space and the signal amplitude is analyzed in the image domain to determine quantitatively the susceptibility gradient for each pixel. It is shown in simulations and experiments that SUMO results in a better visualization of small structures in comparison to SGM. SUMO is applied to patient datasets for visualization of stent and prostate brachytherapy seeds. In addition, SUMO also provides quantitative information about the number of prostate brachytherapy seeds. The method might be extended to application for visualization of other interventional devices, and, like SGM, it might also be used to visualize magnetically labelled cells. Copyright © 2010 Wiley-Liss, Inc.
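The general filter-k-space-then-inspect-amplitude pattern behind SGM/SUMO can be caricatured in a few lines. The low-pass window below is only a stand-in for the paper's actual per-pixel susceptibility-gradient estimator, and all names are our own:

```python
import numpy as np

def kspace_filter_amplitude(image, keep=0.5):
    """Loose sketch of the SGM/SUMO idea: transform a (square)
    image to k-space, apply a filter there, transform back, and
    compare per-pixel amplitude. Here the 'filter' is a centered
    window retaining a `keep` fraction of k-space - a stand-in,
    not the paper's susceptibility-gradient estimator."""
    k = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    half = int(n * keep / 2)
    c = n // 2
    mask = np.zeros_like(k)
    mask[c - half:c + half, c - half:c + half] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(k * mask))
    return np.abs(image) - np.abs(filtered)  # amplitude lost per pixel

flat = np.ones((8, 8))  # a featureless region: nothing is lost
print(np.allclose(kspace_filter_amplitude(flat), 0))  # True
```

In the real method, pixels near a device produce a distinctive k-space signature, so the per-pixel analysis yields a quantitative, positive-contrast map rather than this simple difference.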
Flocks, James
2006-01-01
Scientific knowledge from the past century is commonly represented by two-dimensional figures and graphs, as presented in manuscripts and maps. Using today's computer technology, this information can be extracted and projected into three- and four-dimensional perspectives. Computer models can be applied to datasets to provide additional insight into complex spatial and temporal systems. This process can be demonstrated by applying digitizing and modeling techniques to valuable information within widely used publications. The seminal paper by D. Frazier, published in 1967, identified 16 separate delta lobes formed by the Mississippi River during the past 6,000 yrs. The paper includes stratigraphic descriptions through geologic cross-sections, and provides distribution and chronologies of the delta lobes. The data from Frazier's publication are extensively referenced in the literature. Additional information can be extracted from the data through computer modeling. Digitizing and geo-rectifying Frazier's geologic cross-sections produce a three-dimensional perspective of the delta lobes. Adding the chronological data included in the report provides the fourth-dimension of the delta cycles, which can be visualized through computer-generated animation. Supplemental information can be added to the model, such as post-abandonment subsidence of the delta-lobe surface. Analyzing the regional, net surface-elevation balance between delta progradations and land subsidence is computationally intensive. By visualizing this process during the past 4,500 yrs through multi-dimensional animation, the importance of sediment compaction in influencing both the shape and direction of subsequent delta progradations becomes apparent. Visualization enhances a classic dataset, and can be further refined using additional data, as well as provide a guide for identifying future areas of study.
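The elevation-balance argument can be reduced to a toy model: a lobe's surface aggrades while the lobe is active and subsides after abandonment. All rates and durations below are invented for illustration and are not Frazier's data:

```python
def surface_elevation(dep_rate, sub_rate, active_until, t):
    """Toy net-elevation balance for one delta lobe: sediment
    accumulates while the lobe is active, then the surface
    subsides after abandonment (rates in m per century)."""
    if t <= active_until:
        return dep_rate * t
    peak = dep_rate * active_until
    return peak - sub_rate * (t - active_until)

# a lobe active for 10 centuries, then abandoned and subsiding
history = [surface_elevation(0.5, 0.2, 10, t) for t in range(0, 31, 10)]
print(history)  # [0.0, 5.0, 3.0, 1.0]
```

Animating such balances over many overlapping lobes is what makes the influence of compaction on subsequent progradation directions visible.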
From Visual Exploration to Storytelling and Back Again.
Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M
2016-06-01
The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).
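The capture-and-return behavior at the heart of CLUE can be sketched as a provenance log whose entries can be annotated and revisited; class and field names are ours, not the CLUE prototype's:

```python
class ExplorationLog:
    """Minimal CLUE-style provenance capture: every exploration
    action is recorded with the resulting visualization state, so
    any point can be revisited, annotated, and strung into a
    'Vistory'."""
    def __init__(self):
        self.steps = []  # list of {action, state, note} dicts

    def capture(self, action, state):
        self.steps.append({"action": action, "state": state, "note": None})
        return len(self.steps) - 1  # step id for later reference

    def annotate(self, step_id, note):
        self.steps[step_id]["note"] = note

    def jump_to(self, step_id):
        """Return the captured state - exploration can resume here."""
        return self.steps[step_id]["state"]

log = ExplorationLog()
s0 = log.capture("load", {"dataset": "health.csv"})
s1 = log.capture("filter", {"dataset": "health.csv", "year": 2007})
log.annotate(s1, "outlier countries removed")
print(log.jump_to(s0))  # {'dataset': 'health.csv'}
```

A Vistory is then an ordered, annotated subset of such steps, shareable because each entry carries the full state needed to re-enter the exploration.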
Top-down influence on the visual cortex of the blind during sensory substitution
Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.
2017-01-01
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776
SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T; Kim, D; Kang, S
Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify abdominal compression force (ACF) and maintain the proper ACF throughout the whole procedure. In addition, even with compression, it is necessary to perform 4D CT to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit, and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate and is thus placed between the plate and the patient when compression is applied, where it detects compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and the external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual biofeedback. Results: We could monitor ACF variation in real time and confirmed that the data were correlated with both the respiratory flow data and the external respiratory signal. Even under abdominal compression, it was possible to make the subjects successfully follow the guide patterns using the visual-biofeedback system. Conclusion: The developed real-time ACF monitoring system was found to be functional as intended and consistent.
With the capability of both providing a real-time surrogating signal under compression and enabling visual biofeedback, the system is expected to improve the quality of respiratory motion management in radiation therapy. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
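The reliability test above rests on how well the balloon-pressure (ACF) trace tracks the external respiratory signal. A minimal sketch of that comparison, assuming both signals are uniformly sampled; the waveforms and names below are synthetic illustrations, not the authors' implementation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equally sampled signals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float(np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc)))

# Simulated 10 s of breathing at 0.25 Hz, sampled at 20 Hz.
t = np.arange(0, 10, 0.05)
resp_flow = np.sin(2 * np.pi * 0.25 * t)         # external respiratory signal
acf = 2.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)   # balloon pressure tracking breathing

print(round(pearson_r(acf, resp_flow), 3))  # → 1.0 (perfectly in phase here)
```

In practice the two traces differ in phase and noise, so a cross-correlation over varying lags (e.g. `np.correlate`) may be more informative than a single Pearson coefficient.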
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R R; Brugger, E; Cook, R
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool to providing classes on how to use it and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls.
The visualization production systems include NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view, which works on all types of connections, and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.
Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization
NASA Astrophysics Data System (ADS)
Johnston, Semay; Renambot, Luc; Sauter, Daniel
2013-03-01
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
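Stereoscopic rendering of the kind described, in WebGL or elsewhere, typically uses two off-axis (asymmetric-frustum) cameras separated by the interocular distance. WebGL itself is driven from JavaScript; the sketch below shows only the underlying projection math in Python, and all parameter values are illustrative:

```python
import numpy as np

def stereo_frusta(fov_deg, aspect, near, focal, eye_sep):
    """Left/right asymmetric-frustum bounds for off-axis stereo.

    Returns (left_eye, right_eye), each as (l, r, b, t) at the near
    plane, following the common off-axis stereo projection recipe.
    """
    top = near * np.tan(np.radians(fov_deg) / 2.0)
    bottom = -top
    half_w = top * aspect
    shift = (eye_sep / 2.0) * near / focal   # frustum asymmetry per eye
    left_eye = (-half_w + shift, half_w + shift, bottom, top)
    right_eye = (-half_w - shift, half_w - shift, bottom, top)
    return left_eye, right_eye

# Illustrative values: 60-degree vertical FOV, 16:9 display,
# 2 m focal (zero-parallax) plane, 65 mm interocular distance.
L, R = stereo_frusta(fov_deg=60, aspect=16 / 9, near=0.1, focal=2.0, eye_sep=0.065)
```

Each eye's frustum is shifted toward the other so both converge at the focal plane; the alternative toe-in arrangement (rotating the cameras) introduces vertical parallax and is generally avoided.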
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
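The event-trace idea generalizes readily: each node appends timestamped event records, and the traces are later reduced to summary statistics or fed to display tools. A toy Python sketch of the recording and summarizing steps (names are illustrative; the paper's hypercube instrumentation was, of course, implemented at the operating-system level):

```python
import time
from collections import defaultdict

class EventTracer:
    """Minimal illustration of event-based tracing: each call appends a
    timestamped record; traces are reduced to summary statistics later."""

    def __init__(self):
        self.events = []  # list of (timestamp, node_id, event_name)

    def record(self, node_id, event_name):
        self.events.append((time.perf_counter(), node_id, event_name))

    def summarize(self):
        """Count occurrences of each (node, event) pair -- a global view
        of program behavior compiled from the raw trace."""
        counts = defaultdict(int)
        for _, node, name in self.events:
            counts[(node, name)] += 1
        return dict(counts)

tracer = EventTracer()
for node in range(4):            # pretend four hypercube nodes
    tracer.record(node, "msg_send")
    tracer.record(node, "msg_recv")

print(tracer.summarize()[(0, "msg_send")])  # → 1
```

A real instrumentation system would also bound per-node buffer sizes and flush asynchronously so that tracing perturbs the measured program as little as possible.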
A telephone survey of low vision services in U.S. schools for the blind and visually impaired.
Kran, Barry S; Wright, Darick W
2008-07-01
The scope of clinical low vision services and access to comprehensive eye care through U.S. schools for the blind and visually impaired is not well known. Advances in medicine and educational trends toward inclusion have resulted in higher numbers of visually impaired children with additional cognitive, motor, and developmental impairments enrolled in U.S. schools for the blind and visually impaired. The availability and frequency of eye care and vision education services for individuals with visual and multiple impairments at schools for the blind is explored in this report using data collected in a 24-item telephone survey from 35 of 42 identified U.S. schools for the blind. The results indicate that 54% of the contacted schools (19) offer clinical eye examinations. All of these schools provide eye care to the 6 to 21 age group, yet only 10 schools make this service available to children from birth to 3 years of age. In addition, two thirds of these schools discontinue eye care when the students graduate or transition to adult service agencies. The majority (94.7%) of eye care is provided by optometrists or a combination of optometry and ophthalmology, and 42.1% of these schools have an affiliation with an optometric institution. When there is a collaborative agreement, clinical services for students are available more frequently. The authors find that questions emerge regarding access to care, identification of appropriate models of care, and training of educational/medical/optometric personnel to meet the needs of a very complex patient population.
CytoCom: a Cytoscape app to visualize, query and analyse disease comorbidity networks.
Moni, Mohammad Ali; Xu, Haoming; Liò, Pietro
2015-03-15
CytoCom is an interactive plugin for Cytoscape that can be used to search, explore, analyse and visualize human disease comorbidity networks. It represents disease-disease associations as bipartite graphs and provides International Classification of Diseases, Ninth Revision (ICD9)-centric and disease-name-centric views of disease information. It allows users to find associations between diseases based on two measures: Relative Risk (RR) and φ-correlation values. In the disease network, the size of each node reflects the prevalence of that disease. CytoCom is capable of clustering the disease network based on the ICD9 disease categories. It provides user-friendly access that facilitates exploration of human diseases, and finds additional associated diseases when the user double-clicks a node in the existing network. Additional comorbid diseases are then connected to the existing network. It assists users in interpreting and exploring human diseases through a variety of built-in functions. Moreover, CytoCom permits multi-colouring of disease nodes according to the standard disease classification for expedient visualization. © The Author 2014. Published by Oxford University Press.
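Both association measures named above are computed from simple co-occurrence counts. A sketch under the comorbidity-network definitions commonly used in this literature (C_ij = patients diagnosed with both diseases, P_i and P_j = per-disease patient counts, N = population size); the numbers are illustrative, not CytoCom's data:

```python
import math

def comorbidity_measures(C_ij, P_i, P_j, N):
    """Relative risk and phi-correlation for one disease pair.

    RR > 1 means the pair co-occurs more often than expected by chance;
    phi is the Pearson correlation of the two binary diagnosis variables.
    """
    rr = (C_ij * N) / (P_i * P_j)
    phi = (C_ij * N - P_i * P_j) / math.sqrt(
        P_i * P_j * (N - P_i) * (N - P_j))
    return rr, phi

# 30 shared patients between diseases seen in 100 and 150 patients,
# out of a population of 10,000 (hypothetical counts).
rr, phi = comorbidity_measures(C_ij=30, P_i=100, P_j=150, N=10_000)
print(round(rr, 2))  # → 20.0
```

A network tool would compute these for every disease pair and draw an edge only when the measure clears a significance threshold.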
Perucho, Beatriz; Micó, Vicente
2014-01-01
Progressive addition lenses (PALs) are engraved with permanent marks at standardized locations in order to guarantee correct centering and alignment throughout the manufacturing and mounting processes. Out of the production line, the engraved marks provide useful information about the PAL and act as locator marks for re-inking the removable marks. Even though these marks should be visible by simple inspection with the naked eye, engraved marks are often faint and weak, obscured by scratches, and partially occluded and difficult to recognize on tinted or antireflection-coated lenses. Here, we present an extremely simple optical device (named the wavefront holoscope) for the visualization and characterization of permanent marks in PALs, based on digital in-line holography. Essentially, a point source of coherent light illuminates the engraved mark placed just before a CCD camera, which records a classical Gabor in-line hologram. The recorded hologram is then digitally processed to provide a set of high-contrast images of the engraved marks. Experimental results are presented showing the applicability of the proposed method as a new ophthalmic instrument for the visualization and characterization of engraved marks in PALs.
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Visual circuits of the avian telencephalon: evolutionary implications
NASA Technical Reports Server (NTRS)
Shimizu, T.; Bowers, A. N.
1999-01-01
Birds and primates are vertebrates that possess the most advanced, efficient visual systems. Although lineages leading to these two classes were separated about 300 million years ago, there are striking similarities in their underlying neural mechanisms for visual processing. This paper discusses such similarities with special emphasis on the visual circuits in the avian telencephalon. These similarities include: (1) the existence of two parallel visual pathways and their distinct telencephalic targets, (2) anatomical and functional segregation within the visual pathways, (3) laminar organization of the telencephalic targets of the pathways (e.g. striate cortex in primates), and (4) possible interactions between multiple visual areas. Additional extensive analyses are necessary to determine whether these similarities are due to inheritance from a common ancestral stock or the consequences of convergent evolution based on adaptive response to similar selective pressures. Nevertheless, such a comparison is important to identify the general and specific principles of visual processing in amniotes (reptiles, birds, and mammals). Furthermore, these principles in turn will provide a critical foundation for understanding the evolution of the brain in amniotes.
Li, Lei; Sahi, Sunil K; Peng, Mingying; Lee, Eric B; Ma, Lun; Wojtowicz, Jennifer L; Malin, John H; Chen, Wei
2016-02-10
We developed new optic devices - singly-doped luminescence glasses and nanoparticle-coated lenses that convert UV light to visible light - for improvement of visual system functions. Tb(3+) or Eu(3+) singly-doped borate glasses or CdS-quantum dot (CdS-QD) coated lenses efficiently convert UV light to 542 nm or 613 nm wavelength narrow-band green or red light, or wide-spectrum white light, and thereby provide extra visible light to the eye. In zebrafish (wild-type larvae and adult control animals, retinal degeneration mutants, and light-induced photoreceptor cell degeneration models), the use of Tb(3+) or Eu(3+) doped luminescence glass or CdS-QD coated glass lenses provide additional visible light to the rod and cone photoreceptor cells, and thereby improve the visual system functions. The data provide proof-of-concept for the future development of optic devices for improvement of visual system functions in patients who suffer from photoreceptor cell degeneration or related retinal diseases.
Huisingh, Carrie; McGwin, Gerald; Owsley, Cynthia
2017-01-01
Background: Many studies on vision and driving cessation have relied on measures of sensory function, which are insensitive to the higher-order cognitive aspects of visual processing. The purpose of this study was to examine the association of traditional measures of visual sensory function and higher-order visual processing skills with incident driving cessation in a population-based sample of older drivers. Methods: Two thousand licensed drivers aged ≥70 were enrolled and followed up for three years. Tests of central vision and visual processing were administered at baseline and included visual acuity, contrast sensitivity, sensitivity in the driving visual field, visual processing speed (Useful Field of View (UFOV) Subtest 2 and Trails B), and spatial ability measured by the Visual Closure Subtest of the Motor-free Visual Perception Test. Participants self-reported the month and year of driving cessation and provided a reason for cessation. Cox proportional hazards models were used to generate crude and adjusted hazard ratios with 95% confidence intervals between visual functioning characteristics and the risk of driving cessation over a three-year period. Results: During the study period, 164 participants stopped driving, corresponding to a cumulative incidence of 8.5%. Impaired contrast sensitivity, visual fields, visual processing speed (UFOV and Trails B), and spatial ability were significant risk factors for subsequent driving cessation after adjusting for age, gender, marital status, number of medical conditions, and miles driven. Visual acuity impairment was not associated with driving cessation. Medical problems (63%), specifically musculoskeletal and neurological problems, as well as vision problems (17%), were cited most frequently as the reason for driving cessation. Conclusion: Assessment of cognitive and visual functioning can provide useful information about the subsequent risk of driving cessation among older drivers. In addition, a variety of factors, not just vision, influenced the decision to stop driving and may be amenable to intervention. PMID:27353969
Does my step look big in this? A visual illusion leads to safer stepping behaviour.
Elliott, David B; Vale, Anna; Whitaker, David; Buckley, John G
2009-01-01
Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. 21 young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p<0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.
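The reported association can be pictured with an ordinary least-squares fit of maximum toe elevation on perceived step height. The paired values below are fabricated purely to illustrate the analysis; they are not data from the study:

```python
import numpy as np

# Hypothetical paired measurements (cm): perceived step height vs.
# maximum toe elevation across trials -- illustrative values only.
perceived = np.array([14.0, 14.5, 15.0, 15.5, 16.0, 16.5])
toe_elev = np.array([19.1, 19.6, 20.2, 20.5, 21.1, 21.4])

# Degree-1 polynomial fit gives the regression slope and intercept;
# the correlation coefficient quantifies the strength of association.
slope, intercept = np.polyfit(perceived, toe_elev, 1)
r = np.corrcoef(perceived, toe_elev)[0, 1]
print(slope > 0, r > 0.99)  # → True True
```

A positive slope of roughly 1 cm of clearance per 1 cm of perceived height would correspond to the paper's claim that toe elevation changed by an amount matching the size of the illusion.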
Retrieving the unretrievable in electronic imaging systems: emotions, themes, and stories
NASA Astrophysics Data System (ADS)
Joergensen, Corinne
1999-05-01
New paradigms such as 'affective computing' and user-based research are extending the realm of facets traditionally addressed in IR systems. This paper builds on previous research reported to the electronic imaging community concerning the need to provide access to more abstract attributes of images than those currently amenable to a variety of content-based and text-based indexing techniques. Empirical research suggests that, for visual materials, in addition to standard bibliographic data and broad subject terms, and in addition to such perceptual attributes as color, texture, shape, and position or focal point, additional access points such as themes, abstract concepts, emotions, stories, and 'people-related' information such as social status would be useful in image retrieval. More recent research demonstrates that similar results are also obtained with 'fine arts' images, for which access to these types of attributes is generally not provided. Current efforts to match the image attributes revealed in empirical research with those addressed in current textual and content-based indexing systems are discussed, as well as the need for new representations of image attributes and for collaboration among diverse communities of researchers.
Real-time recording and classification of eye movements in an immersive virtual environment.
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary
2013-10-10
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
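The angular-distance and event-classification steps described can be sketched as follows. The velocity-threshold (I-VT) rule shown here is a common simplification of the fixation/saccade identification the authors provide, and the threshold value is illustrative:

```python
import numpy as np

def angular_distance_deg(v1, v2):
    """Angle in degrees between a gaze vector and an object direction."""
    v1 = np.asarray(v1, float)
    v2 = np.asarray(v2, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def classify_ivt(angles_deg, timestamps_s, vel_threshold=100.0):
    """Velocity-threshold (I-VT) labelling: samples whose angular
    velocity exceeds the threshold (deg/s) are saccades, the rest
    fixations. Pursuit requires a third, intermediate velocity band."""
    vel = np.abs(np.diff(angles_deg)) / np.diff(timestamps_s)
    return ["saccade" if v > vel_threshold else "fixation" for v in vel]

# Gaze straight ahead vs. an object 45 degrees off to the side.
print(round(angular_distance_deg([0, 0, 1], [1, 0, 1]), 1))  # → 45.0
```

Clipping the cosine into [-1, 1] before `arccos` guards against floating-point round-off when the two vectors are nearly parallel.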
Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering
Stone, John E.; Sherman, William R.; Schulten, Klaus
2016-01-01
Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization, however round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, that are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
Sligte, Ilja G; Wokke, Martijn E; Tesselaar, Johannes P; Scholte, H Steven; Lamme, Victor A F
2011-05-01
To guide our behavior in successful ways, we often need to rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). While VSTM is usually broken down into iconic memory (brief and high-capacity store) and visual working memory (sustained, yet limited-capacity store), recent studies have suggested the existence of an additional and intermediate form of VSTM that depends on activity in extrastriate cortex. In previous work, we have shown that this fragile form of VSTM can be dissociated from iconic memory. In the present study, we provide evidence that fragile VSTM is different from visual working memory as magnetic stimulation of the right dorsolateral prefrontal cortex (DLPFC) disrupts visual working memory, while leaving fragile VSTM intact. In addition, we observed that people with high DLPFC activity had superior working memory capacity compared to people with low DLPFC activity, and only people with high DLPFC activity really showed a reduction in working memory capacity in response to magnetic stimulation. Altogether, this study shows that VSTM consists of three stages that have clearly different characteristics and rely on different neural structures. On the methodological side, we show that it is possible to predict individual susceptibility to magnetic stimulation based on functional MRI activity. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
An aftereffect of adaptation to mean size
Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David
2013-01-01
The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is an adaptable, and therefore explicitly encoded, dimension of visual scenes. PMID:24348083
NASA Astrophysics Data System (ADS)
Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.
2014-12-01
Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The focus recently shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface observations, upper air, etc.), into one place. Our server-side architecture provides a real-time stream-processing system that utilizes server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is built on the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction.
Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, and ongoing research activities related to this project.
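The wavelet-based compression mentioned above exploits the fact that most signal energy concentrates in a few coefficients. The NEIS GPU codec is not described in detail here; the following is a generic single-level Haar transform sketch, purely to illustrate the principle:

```python
import numpy as np

def haar_1d(signal):
    """One level of the orthonormal Haar transform: pairwise averages
    (coarse signal) and pairwise differences (detail coefficients)."""
    s = np.asarray(signal, float).reshape(-1, 2)
    avg = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    det = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return avg, det

def inverse_haar_1d(avg, det):
    """Exact reconstruction from averages and details."""
    s0 = (avg + det) / np.sqrt(2)
    s1 = (avg - det) / np.sqrt(2)
    return np.stack([s0, s1], axis=1).ravel()

data = np.array([4.0, 4.0, 8.0, 8.0, 5.0, 1.0, 2.0, 2.0])
avg, det = haar_1d(data)
# Three of the four detail coefficients are exactly zero here, so the
# signal compresses well: keep the averages plus the nonzero details.
recon = inverse_haar_1d(avg, det)
```

Production codecs apply several transform levels, quantize the coefficients, and entropy-code the result; discarding small detail coefficients is what makes the scheme lossy but compact.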
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the causes of coronary heart disease and to develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. Once a geometric representation of the vasculature is determined, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual-environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. These can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
Flow-visualization study of the X-29A aircraft at high angles of attack using a 1/48-scale model
NASA Technical Reports Server (NTRS)
Cotton, Stacey J.; Bjarke, Lisa J.
1994-01-01
A water-tunnel study on a 1/48-scale model of the X-29A aircraft was performed at the NASA Dryden Flow Visualization Facility. The water-tunnel test enhanced the results of the X-29A flight tests by providing flow-visualization data for comparison and insights into the aerodynamic characteristics of the aircraft. The model was placed in the water tunnel at angles of attack of 20 to 55 deg. and with angles of sideslip from 0 to 5 deg. In general, flow-visualization techniques provided useful information on vortex formation, separation, and breakdown and their role in yaw asymmetries and tail buffeting. Asymmetric forebody vortices were observed at angles of attack greater than 30 deg. with 0 deg. sideslip and greater than 20 deg. with 5 deg. sideslip. While the asymmetric flows observed in the water tunnel did not agree fully with the flight data, they did show some of the same trends. In addition, the flow visualization indicated that the interaction of forebody vortices and the wing wake at angles of attack between 20 and 35 deg. may cause vertical-tail buffeting observed in flight.
Short-term memory for spatial configurations in the tactile modality: a comparison with vision.
Picard, Delphine; Monnier, Catherine
2009-11-01
This study investigates the role of acquisition constraints on the short-term retention of spatial configurations in the tactile modality in comparison with vision. It tests whether the sequential processing of information inherent to the tactile modality could account for the limitation in short-term memory span for tactual-spatial information. In addition, this study investigates developmental aspects of short-term memory for tactual- and visual-spatial configurations. A total of 144 child and adult participants were assessed for their memory span in three different conditions: tactual, visual, and visual with a limited field of view. The results showed a lower tactual-spatial memory span than visual-spatial memory span, regardless of age. However, the differences in memory span observed between the tactile and visual modalities vanished when the visual processing of information occurred within a limited field. These results provide evidence for an impact of acquisition constraints on the retention of spatial information in the tactile modality in both childhood and adulthood.
Neural activity reveals perceptual grouping in working memory.
Rabbitt, Laura R; Roberts, Daniel M; McDonald, Craig G; Peterson, Matthew S
2017-03-01
There is extensive evidence that the contralateral delay activity (CDA), a scalp recorded event-related brain potential, provides a reliable index of the number of objects held in visual working memory. Here we present evidence that the CDA not only indexes visual object working memory, but also the number of locations held in spatial working memory. In addition, we demonstrate that the CDA can be predictably modulated by the type of encoding strategy employed. When individual locations were held in working memory, the pattern of CDA modulation mimicked previous findings for visual object working memory. Specifically, CDA amplitude increased monotonically until working memory capacity was reached. However, when participants were instructed to group individual locations to form a constellation, the CDA was prolonged and reached an asymptote at two locations. This result provides neural evidence for the formation of a unitary representation of multiple spatial locations. Published by Elsevier B.V.
Emotion and Perception: The Role of Affective Information
Zadra, Jonathan R.; Clore, Gerald L.
2011-01-01
Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, aspects of the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565
Flow visualization methods for field test verification of CFD analysis of an open gloveport
Strons, Philip; Bailey, James L.
2017-01-01
Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. Having a means to visualize air flow for field tests in general provides greater insight by indicating direction in addition to the magnitude of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be appropriately updated to match the actual flow velocities in both magnitude and direction.
Comparison of Middle Ear Visualization With Endoscopy and Microscopy.
Bennett, Marc L; Zhang, Dongqing; Labadie, Robert F; Noble, Jack H
2016-04-01
The primary goal of chronic ear surgery is the creation of a safe, clean dry ear. For cholesteatomas, complete removal of disease is dependent on visualization. Conventional microscopy is adequate for most dissection, but various subregions of the middle ear are better visualized with endoscopy. The purpose of the present study was to quantitatively assess the improved visualization that endoscopes afford as compared with operating microscopes. Microscopic and endoscopic views were simulated using a three-dimensional model developed from temporal bone scans. Surface renderings of the ear canal and middle ear subsegments were defined and the percentage of visualization of each middle ear subsegment, both with and without ossicles, was then determined for the microscope as well as for 0-, 30-, and 45-degree endoscopes. Using this information, we analyzed which mode of visualization is best suited for dissection within a particular anatomical region. Using a 0-degree scope provides significantly more visualization of every subregion, except the antrum, compared with a microscope. In addition, angled scopes permit visualizing significantly more surface area of every subregion of the middle ear than straight scopes or microscopes. Endoscopes offer advantages for cholesteatoma dissection in difficult-to-visualize areas including the sinus tympani and epitympanum.
Top-down influence on the visual cortex of the blind during sensory substitution.
Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C
2016-01-15
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.
Visual Purple, the Next Generation Crisis Management Decision Training Tool
2001-09-01
talents of professional Hollywood screenwriters during the scripting and writing process of the simulations. Additionally, cinematic techniques learned ... cultural, and language experts for research development. Additionally, GTA provides country specific support in script writing and cinematic resources as ... The result is an entirely new dimension of realism that traditional exercises often fail to capture. The scenario requires the participant to make the
Coates, Sarah J; Kvedar, Joseph; Granstein, Richard D
2015-04-01
Telemedicine is the use of telecommunications technology to support health care at a distance. Dermatology relies on visual cues that are easily captured by imaging technologies, making it ideally suited for this care model. Advances in telecommunications technology have made it possible to deliver high-quality skin care when patient and provider are separated by both time and space. Most recently, mobile devices that connect users through cellular data networks have enabled teledermatologists to instantly communicate with primary care providers throughout the world. The availability of teledermoscopy provides an additional layer of visual information to enhance the quality of teleconsultations. Teledermatopathology has become increasingly feasible because of advances in digitization of entire microscopic slides and robot-assisted microscopy. Barriers to additional expansion of these services include underdeveloped infrastructure in remote regions, fragmented electronic medical records, and varying degrees of reimbursement. Teleconsultants also confront special legal and ethical challenges as they work toward building a global network of practicing physicians. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Bo; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing 100101; Xia Jing
Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
Performance evaluation of a kinesthetic-tactual display
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.; Dunn, R. S.
1982-01-01
Simulator studies demonstrated the feasibility of using kinesthetic-tactual (KT) displays for providing collective and cyclic command information, and suggested that KT displays may increase pilot workload capability. A dual-axis laboratory tracking task suggested that beyond reduction in visual scanning, there may be additional sensory or cognitive benefits to the use of multiple sensory modalities. Single-axis laboratory tracking tasks revealed performance with a quickened KT display to be equivalent to performance with a quickened visual display for a low frequency sum-of-sinewaves input. In contrast, an unquickened KT display was inferior to an unquickened visual display. Full scale simulator studies and/or inflight testing are recommended to determine the generality of these results.
Abe, Shigeaki; Hyono, Atsushi; Kawai, Koji; Yonezawa, Tetsu
2014-03-01
In this study, we investigated a conductivity treatment for scanning electron microscope (SEM) observation that uses novel asymmetrical choline-type room-temperature ionic liquids (RTILs). By immersion in an RTIL solution alone, clear SEM images of several types of biological samples were successfully obtained. In addition, we could visualize protozoans using RTILs without any dilution. These results suggest that the asymmetrical choline-type RTILs used in this study are suitable for visualizing biological samples by SEM. Treatment without the need for dilution obviates the need to adjust the RTIL concentration and provides a rapid and easy conductivity treatment for insulating samples.
VANLO - Interactive visual exploration of aligned biological networks
Brasch, Steffen; Linsen, Lars; Fuellen, Georg
2009-01-01
Background Protein-protein interaction (PPI) is fundamental to many biological processes. In the course of evolution, biological networks such as protein-protein interaction networks have developed. Biological networks of different species can be aligned by finding instances (e.g. proteins) with the same common ancestor in the evolutionary process, so-called orthologs. For a better understanding of the evolution of biological networks, such aligned networks have to be explored. Visualization can play a key role in making the various relationships transparent. Results We present a novel visualization system for aligned biological networks in 3D space that naturally embeds existing 2D layouts. In addition to displaying the intra-network connectivities, we also provide insight into how the individual networks relate to each other by placing aligned entities on top of each other in separate layers. We optimize the layout of the entire alignment graph in a global fashion that takes into account inter- as well as intra-network relationships. The layout algorithm includes a step of merging aligned networks into one graph, laying out the graph with respect to application-specific requirements, splitting the merged graph again into individual networks, and displaying the network alignment in layers. In addition to representing the data in a static way, we also provide different interaction techniques to explore the data with respect to application-specific tasks. Conclusion Our system provides an intuitive global understanding of aligned PPI networks and it allows the investigation of key biological questions. We evaluate our system by applying it to real-world examples documenting how our system can be used to investigate the data with respect to these key questions. Our tool VANLO (Visualization of Aligned Networks with Layout Optimization) can be accessed at . PMID:19821976
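The merge/lay-out/split/layer pipeline described in the VANLO abstract can be sketched in a few lines. This is a hypothetical illustration, not the VANLO source: aligned node pairs are collapsed into one node for a shared 2D layout, then the networks are split back onto separate z-layers so orthologs sit directly above one another.

```python
# Hypothetical sketch of the layered alignment layout (assumed data shapes,
# not the actual VANLO code).

def layered_layout(net_a, net_b, alignment, layout_2d):
    """net_a, net_b: dicts mapping node -> set of neighbours.
    alignment: dict mapping net_a nodes to their net_b orthologs.
    layout_2d: callable taking the merged graph, returning node -> (x, y)."""
    inv = {b: a for a, b in alignment.items()}
    rep_a = {u: ("pair", u) if u in alignment else ("a", u) for u in net_a}
    rep_b = {u: ("pair", inv[u]) if u in inv else ("b", u) for u in net_b}

    # Step 1: merge both networks into one graph over representatives.
    merged = {}
    for net, rep in ((net_a, rep_a), (net_b, rep_b)):
        for u in net:
            merged.setdefault(rep[u], set())
            for v in net[u]:
                merged[rep[u]].add(rep[v])

    # Step 2: lay out the merged graph once in 2D.
    pos = layout_2d(merged)

    # Step 3: split again, one z-layer per network; aligned entities
    # inherit identical (x, y) and differ only in z.
    layer_a = {u: (*pos[rep_a[u]], 0.0) for u in net_a}
    layer_b = {u: (*pos[rep_b[u]], 1.0) for u in net_b}
    return layer_a, layer_b

# A trivial stand-in layout: place merged nodes along a line.
def line_layout(graph):
    return {n: (float(i), 0.0) for i, n in enumerate(sorted(graph))}

net_a = {"p1": {"p2"}, "p2": {"p1"}}
net_b = {"q1": {"q2"}, "q2": {"q1"}}
a_pos, b_pos = layered_layout(net_a, net_b, {"p1": "q1"}, line_layout)
```

Any real 2D layout (e.g. force-directed) can be plugged in for `line_layout`; the key property is that aligned entities end up stacked across layers.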
Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.
2008-12-01
Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.
Multidimensional data analysis in immunophenotyping.
Loken, M R
2001-05-01
The complexity of cell populations requires careful selection of reagents to detect cells of interest and distinguish them from other types. Additional reagents are frequently used to provide independent criteria for cell identification. Two or three monoclonal antibodies in combination with forward and right-angle light scatter generate a data set that is difficult to visualize because the data must be represented in four- or five-dimensional space. The separation between cell populations provided by the multiple characteristics is best visualized by multidimensional analysis using all parameters simultaneously to identify populations within the resulting hyperspace. Groups of cells are distinguished based on a combination of characteristics not apparent in any usual two-dimensional representation of the data.
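The point about using all parameters simultaneously can be illustrated with a toy sketch (not tied to any cytometry package; the marker names and values are made up): events described by five parameters, two scatter channels plus three antibody intensities, are assigned to populations by nearest centroid in the full 5-D space, where populations with nearly identical scatter would overlap in a 2-D view.

```python
import math

def assign(events, centroids):
    """Assign each 5-parameter event to the nearest population centroid,
    measuring distance in the full 5-D space at once."""
    def dist(e, c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(e, c)))
    return [min(centroids, key=lambda name: dist(e, centroids[name]))
            for e in events]

# Toy populations: (FSC, SSC, CD3, CD19, CD45) in arbitrary units.
# Note the nearly identical scatter values: a scatter-only 2-D plot
# could not separate these two populations.
centroids = {
    "T cells": (50, 30, 90, 5, 95),
    "B cells": (48, 28, 5, 85, 92),
}
events = [(49, 29, 88, 7, 94), (51, 27, 6, 80, 90)]
labels = assign(events, centroids)
```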
Polarization visualization of changes of anisotropic meat structure
NASA Astrophysics Data System (ADS)
Blokhina, Anastasia A.; Ryzhova, Victoria A.; Kleshchenok, Maksim A.; Lobanova, Anastasiya Y.
2017-06-01
The main consideration in developing methods for optical diagnostics and visualization of biological tissues using polarized radiation is analysis of the transformation of the polarization state of light as it is scattered by the medium. The spatial distributions of the polarization characteristics of the detected scattered radiation, in particular the degree of depolarization, show pronounced anisotropy. The presence of optical anisotropy can provide valuable additional information on the structural features of a biological object and its physiological status. Analysis of the polarization characteristics of scattered radiation from biological tissues in some cases provides qualitatively new results in the study of biological samples. These results can be used in medicine and the food industry.
Intermediate addition multifocals provide safe stair ambulation with adequate 'short-term' reading.
Elliott, David B; Hotchkiss, John; Scally, Andrew J; Foster, Richard; Buckley, John G
2016-01-01
A recent randomised controlled trial indicated that providing long-term multifocal wearers with a pair of distance single-vision spectacles for use outside the home reduced falls risk in active older people. However, it also found that participants disliked continually switching between using two pairs of glasses and adherence to the intervention was poor. In this study we determined whether intermediate addition multifocals (which could be worn most of the time inside and outside the home and thus avoid continual switching) could provide similar gait safety on stairs to distance single vision spectacles whilst also providing adequate 'short-term' reading and near vision. Fourteen healthy long-term multifocal wearers completed stair ascent and descent trials over a 3-step staircase wearing intermediate and full addition bifocals and progressive addition lenses (PALs) and single-vision distance spectacles. Gait safety/caution was assessed using foot clearance measurements (toe on ascent, heel on descent) over the step edges and ascent and descent duration. Binocular near visual acuity, critical print size and reading speed were measured using Bailey-Lovie near charts and MNRead charts at 40 cm. Gait safety/caution measures were worse with full addition bifocals and PALs compared to intermediate bifocals and PALs. The intermediate PALs provided similar gait ascent/descent measures to those with distance single-vision spectacles. The intermediate addition PALs also provided good reading ability: Near word acuity and MNRead critical print size were better with the intermediate addition PALs than with the single-vision lenses (p < 0.0001), with a mean near visual acuity of 0.24 ± 0.13 logMAR (~N5.5) which is satisfactory for most near vision tasks when performed for a short period of time.
The better ability to 'spot read' with the intermediate addition PALs compared to single-vision spectacles suggests that elderly individuals might better comply with the use of intermediate addition PALs outside the home. A lack of difference in gait parameters for the intermediate addition PALs compared to distance single-vision spectacles suggests they could be useful in helping prevent falls in older well-adapted full addition PAL wearers. A randomised controlled trial to investigate the usefulness of intermediate multifocals in preventing falls seems warranted. © 2015 The Authors Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables users to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
Off-the-shelf Control of Data Analysis Software
NASA Astrophysics Data System (ADS)
Wampler, S.
The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.
NASA Astrophysics Data System (ADS)
Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.
2016-12-01
The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who's doing what, when, and where in the region) for United States Government-funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and ii) implemented the ArcGIS JavaScript API 4.0 to allow 3D maps to be deployed directly in the user's web browser with enhanced customization of popups. Module additions include i) a dashboard UI powered by a back-end Apache SOLR engine to visualize data in intuitive and interactive charts and ii) a printing module that allows users to customize maps and export them to different formats (pdf, ppt, gif, and jpg). New reference layers and an updated ship tracks layer have also been added. These improvements aim to improve discoverability, enhance logistics coordination, identify geographic gaps in research/observation effort, and foster enhanced collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.
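The multi-filter search described above amounts to AND-combining user-selected conditions over project records. A minimal sketch follows; the field names and records are illustrative only, and ARMAP's actual back end uses Apache SOLR faceted queries rather than in-memory filtering.

```python
# Toy sketch of combining multiple user-driven filters in a project search
# (hypothetical record fields; not ARMAP's SOLR implementation).

def search(projects, **filters):
    """Keep only records matching every supplied field=value filter."""
    def keep(p):
        return all(p.get(field) == value for field, value in filters.items())
    return [p for p in projects if keep(p)]

projects = [
    {"agency": "NSF", "year": 2016, "region": "North Slope"},
    {"agency": "NASA", "year": 2016, "region": "Chukchi Sea"},
    {"agency": "NSF", "year": 2014, "region": "Bering Strait"},
]
hits = search(projects, agency="NSF", year=2016)
```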
The look of royalty: visual and odour signals of reproductive status in a paper wasp
Tannure-Nascimento, Ivelize C; Nascimento, Fabio S; Zucchi, Ronaldo
2008-01-01
Reproductive conflicts within animal societies occur when all females can potentially reproduce. In social insects, these conflicts are regulated largely by behaviour and chemical signalling. There is evidence that the presence of signals providing direct information about the quality of reproductive females would increase the fitness of all parties. In this study, we present an association between visual and chemical signals in the paper wasp Polistes satan. Our results showed that in nest-founding phase colonies, variation in visual signals is linked to relative fertility, while chemical signals are related to dominance status. In addition, experiments revealed that higher hierarchical positions were occupied by subordinates with distinct proportions of cuticular hydrocarbons and distinct visual marks. Therefore, these wasps present cues that convey reliable information about their reproductive status. PMID:18682372
NV: Nessus Vulnerability Visualization for the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Lane; Spahn, Riley B; Iannacone, Michael D
2012-01-01
Network vulnerability is a critical component of network security. Yet vulnerability analysis has received relatively little attention from the security visualization community. In this paper we describe nv, a web-based Nessus vulnerability visualization. Nv utilizes treemaps and linked histograms to allow system administrators to discover, analyze, and manage vulnerabilities on their networks. In addition to visualizing single Nessus scans, nv supports the analysis of sequential scans by showing which vulnerabilities have been fixed, remain open, or are newly discovered. Nv was also designed to operate completely in-browser, to avoid sending sensitive data to outside servers. We discuss the design of nv, as well as provide case studies demonstrating vulnerability analysis workflows which include a multiple-node testbed and data from the 2011 VAST Challenge.
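The sequential-scan comparison described above, classifying each vulnerability as fixed, still open, or newly discovered, reduces to set differences over (host, vulnerability) pairs. The sketch below is a hypothetical illustration of that idea, not nv's actual code; the host addresses and CVE identifiers are made up.

```python
# Hypothetical sketch: diffing two sequential vulnerability scans.
# Each scan is modelled as a set of (host, vulnerability_id) tuples.

def diff_scans(previous, current):
    fixed = previous - current        # present before, gone now
    still_open = previous & current   # present in both scans
    new = current - previous          # first seen in the latest scan
    return fixed, still_open, new

scan1 = {("10.0.0.5", "CVE-2011-0001"), ("10.0.0.5", "CVE-2011-0002")}
scan2 = {("10.0.0.5", "CVE-2011-0002"), ("10.0.0.9", "CVE-2011-0003")}

fixed, still_open, new = diff_scans(scan1, scan2)
```

In a visualization like nv, each of the three result sets would drive its own color coding in the treemap and linked histograms.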
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths with the central attention task, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual-task condition. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Sockeye: A 3D Environment for Comparative Genomics
Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.
2004-01-01
Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
The role of 3-D interactive visualization in blind surveys of H I in galaxies
NASA Astrophysics Data System (ADS)
Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.
2015-09-01
Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.
NASA Astrophysics Data System (ADS)
Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake
2015-03-01
Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain, and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery built on two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures.
The proposed idea was fully implemented as independent planning software and additional data are used for verification and validation. The experimental results show: (1) the proposed methods provided greatly improved planning efficiency while optimal surgical plans were successfully achieved, (2) the proposed methods successfully highlighted important structures and facilitated planning, (3) the proposed methods require shorter processing time than classical segmentation algorithms, and (4) these methods can be used to improve surgical safety for surgical robots.
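The final highlighting step described above, applying a pre-defined transfer function to per-voxel surgical-importance values, can be sketched as follows. The threshold and colors are illustrative assumptions, not the authors' actual transfer function:

```python
import numpy as np

def highlight(importance):
    """Map per-voxel surgical importance in [0, 1] to RGBA.

    Illustrative transfer function: ordinary tissue renders gray and
    translucent; critical structures (importance > 0.8) render red,
    and opacity grows with importance.
    """
    imp = np.asarray(importance, dtype=float)
    rgba = np.zeros(imp.shape + (4,))
    critical = imp > 0.8
    rgba[..., 0] = np.where(critical, 1.0, 0.5)  # red channel
    rgba[..., 1] = np.where(critical, 0.0, 0.5)  # green channel
    rgba[..., 2] = np.where(critical, 0.0, 0.5)  # blue channel
    rgba[..., 3] = np.clip(imp, 0.1, 1.0)        # opacity from importance
    return rgba
```

In a real volume renderer this lookup would feed the compositing stage; here it simply illustrates how an importance map becomes a color/opacity assignment.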
Electronic Travel Aids for Blind Persons.
ERIC Educational Resources Information Center
Hill, Everett W.; Bradfield, Anna L.
1984-01-01
The article describes the application, for visually impaired persons, of four widely used Electronic Travel Aids: the Lindsay Russell Pathsounder, the Mowat Sensor, the Sonicguide, and the C-5 Laser Cane. In addition, a research review provides insight into the issues affecting future use of the devices. (Author/CL)
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques; specifically, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results show that the consistency checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field for estimating the SRT fields. A classification of regional contraction patterns as normal or dysfunctional, compared against experts' diagnoses, indicates that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
Evaluation of an Innovative Digital Assessment Tool in Dental Anatomy.
Lam, Matt T; Kwon, So Ran; Qian, Fang; Denehy, Gerald E
2015-05-01
The E4D Compare software is an innovative tool that provides immediate feedback on students' projects and competencies. It should provide consistent scores even when different scanners are used, despite inherent subtle differences in their calibration. This study aimed to evaluate potential discrepancies in evaluation using the E4D Compare software with four different NEVO scanners in dental anatomy projects. Additionally, the correlation between digital and visual scores was evaluated. Thirty-five projects of maxillary left central incisors were evaluated; thirty were wax-ups performed by four operators and five were standard dentoform teeth. Five scores were obtained for each project: one from an instructor who visually graded the project and one from each of four different NEVO scanners. A faculty member involved in teaching the dental anatomy course blindly scored the 35 projects. One operator scanned all projects with the four NEVO scanners (D4D Technologies, Richardson, TX, USA). The images were aligned to the gold standard, with tolerance set at 0.3 mm, to generate a score reflecting the percentage match between the project and the gold standard. One-way ANOVA with repeated measures was used to determine whether there was a significant difference in scores among the four NEVO scanners. A paired-sample t-test was used to detect any difference between visual scores and the average scores of the four NEVO scanners. Pearson's correlation test was used to assess the relationship between visual scores and the average scores of the NEVO scanners. There was no significant difference in mean scores among the four NEVO scanners [F(3, 102) = 2.27, p = 0.0852, one-way ANOVA with repeated measures]. However, the data provided strong evidence of a significant difference between visual and digital scores (p = 0.0217, paired-sample t-test): mean visual scores were significantly lower than digital scores (72.4 vs. 75.1).
Pearson's correlation coefficient of 0.85 indicated a strong correlation between visual and digital scores (p < 0.0001). The E4D Compare software thus provides consistent scores even when different scanners are used and correlates well with visual scores, making it a promising digital assessment tool for dental education.
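The correlation statistic reported above is straightforward to reproduce on any paired score set. A minimal sketch of the Pearson coefficient follows; the example scores are invented for illustration and are not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linear paired scores yield r = 1 exactly; the study's r = 0.85 indicates strong but imperfect agreement between visual and digital grading.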
Interference, aging, and visuospatial working memory: the role of similarity.
Rowe, Gillian; Hasher, Lynn; Turcotte, Josée
2010-11-01
Older adults' performance on working memory (WM) span tasks is known to be negatively affected by the buildup of proactive interference (PI) across trials. PI has been reduced in verbal tasks and performance increased by presenting distinctive items across trials. In addition, reversing the order of trial presentation (i.e., starting with the longest sets first) has been shown to reduce PI in both verbal and visuospatial WM span tasks. We considered whether making each trial visually distinct would improve older adults' visuospatial WM performance, and whether combining the 2 PI-reducing manipulations, distinct trials and reversed order of presentation, would prove additive, thus providing even greater benefit. Forty-eight healthy older adults (age range = 60-77 years) completed 1 of 3 versions of a computerized Corsi block test. For 2 versions of the task, trials were either all visually similar or all visually distinct, and were presented in the standard ascending format (shortest set size first). In the third version, visually distinct trials were presented in a reverse order of presentation (longest set size first). Span scores were reliably higher in the ascending version for visually distinct compared with visually similar trials, F(1, 30) = 4.96, p = .03, η² = .14. However, combining distinct trials and a descending format proved no more beneficial than administering the descending format alone. Our findings suggest that a more accurate measurement of the visuospatial WM span scores of older adults (and possibly neuropsychological patients) might be obtained by reducing within-test interference.
Sophisticated Communication in the Brazilian Torrent Frog Hylodes japi.
de Sá, Fábio P; Zina, Juliana; Haddad, Célio F B
2016-01-01
Intraspecific communication in frogs plays an important role in the recognition of conspecifics in general and of potential rivals or mates in particular and therefore with relevant consequences for pre-zygotic reproductive isolation. We investigate intraspecific communication in Hylodes japi, an endemic Brazilian torrent frog with territorial males and an elaborate courtship behavior. We describe its repertoire of acoustic signals as well as one of the most complex repertoires of visual displays known in anurans, including five new visual displays. Previously unknown in frogs, we also describe a bimodal inter-sexual communication system where the female stimulates the male to emit a courtship call. As another novelty for frogs, we show that in addition to choosing which limb to signal with, males choose which of their two vocal sacs will be used for visual signaling. We explain how and why this is accomplished. Control of inflation also provides additional evidence that vocal sac movement and color must be important for visual communication, even while producing sound. Through the current knowledge on visual signaling in Neotropical torrent frogs (i.e. hylodids), we discuss and highlight the behavioral diversity in the family Hylodidae. Our findings indicate that communication in species of Hylodes is undoubtedly more sophisticated than we expected and that visual communication in anurans is more widespread than previously thought. This is especially true in tropical regions, most likely due to the higher number of species and phylogenetic groups and/or to ecological factors, such as higher microhabitat diversity.
NASA Astrophysics Data System (ADS)
Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.
2017-11-01
The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of a virtual research environment (VRE) general architecture for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available at the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality, thus providing analysis of large datasets with visualization of the results and export to standard file formats (XML, binary, etc.). Several cartographical web services have been developed in a prototype of the system to provide capabilities to work with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
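A cartographical service of the WMS kind described above answers standardized GetMap requests. The sketch below assembles such a request URL; the endpoint and layer name are hypothetical placeholders, not part of the system described:

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width=800, height=600):
    """Assemble an OGC WMS 1.3.0 GetMap request URL.

    bbox is (min_lat, min_lon, max_lat, max_lon) for EPSG:4326,
    per the WMS 1.3.0 axis-order convention.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical node endpoint and climate layer
url = build_getmap_url("http://example.org/wms", "temperature_anomaly",
                       (50.0, 60.0, 60.0, 100.0))
```

Any WMS-compliant node in such an SDI would answer this request with a rendered map image, which is what makes the interconnected-node architecture interoperable.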
Olechnovic, Kliment; Margelevicius, Mindaugas; Venclovas, Ceslovas
2011-03-01
We present Voroprot, an interactive cross-platform software tool that provides a unique set of capabilities for exploring geometric features of protein structure. Voroprot allows the construction and visualization of the Apollonius diagram (also known as the additively weighted Voronoi diagram), the Apollonius graph, protein alpha shapes, interatomic contact surfaces, solvent accessible surfaces, pockets and cavities inside protein structure. Voroprot is available for Windows, Linux and Mac OS X operating systems and can be downloaded from http://www.ibt.lt/bioinformatics/voroprot/.
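In the additively weighted Voronoi (Apollonius) diagram that Voroprot constructs, each atom's cell collects the points whose Euclidean distance to the atom center, minus the atom radius, is smallest. A minimal sketch of that cell-assignment rule (not Voroprot's implementation):

```python
import math

def nearest_atom(point, centers, radii):
    """Index of the atom owning `point` in the additively weighted
    Voronoi diagram: minimize ||p - c_i|| - r_i over atoms i."""
    dists = [math.dist(point, c) - r for c, r in zip(centers, radii)]
    return dists.index(min(dists))
```

Unlike the ordinary Voronoi diagram, a point equidistant from two centers belongs to the larger atom, which is why this weighting suits atoms of unequal van der Waals radii.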
Bhirde, Ashwin A; Sousa, Alioscka A; Patel, Vyomesh; Azari, Afrouz A; Gutkind, J Silvio; Leapman, Richard D; Rusling, James F
2009-01-01
Aims: To image the distribution of drug molecules attached to single-wall carbon nanotubes (SWNTs). Materials & methods: Herein we report the use of scanning transmission electron microscopy (STEM) for atomic-scale visualization and quantitation of single platinum-based drug molecules attached to SWNTs designed for targeted drug delivery. Fourier transform infrared spectroscopy and energy-dispersive x-ray spectroscopy were used for characterization of the SWNT drug conjugates. Results: Z-contrast STEM imaging enabled visualization of the first-line anticancer drug cisplatin on the nanotubes at the single-molecule level. The identity and presence of cisplatin on the nanotubes was confirmed using energy-dispersive x-ray spectroscopy and Fourier transform infrared spectroscopy. STEM tomography was also used to provide additional insights concerning the nanotube conjugates. Finally, our observations provide a rationale for exploring the use of SWNT bioconjugates to selectively target and kill squamous cancer cells. Conclusion: Z-contrast STEM imaging provides a means for direct visualization, distribution mapping, and quantitation of heavy-metal-containing molecules (i.e., cisplatin) attached to the surfaces of carbon SWNTs. PMID:19839812
Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G
2016-05-01
The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix (pedigree-based versus combined pedigree-genomic) on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals with visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 with LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using either the numerator relationship matrix alone or a matrix combining the genomic and numerator relationship matrices. The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using the numerator relationship matrix and 0.29, 0.16, and 0.23, respectively, using the combined matrix. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when the combined matrix was used.
Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The methods provided similar breeding value accuracies, especially for the visual scores.
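The heritability estimates quoted above are, by definition, the ratio of additive genetic variance to total phenotypic variance. A one-line sketch, with variance components invented purely for illustration:

```python
def heritability(var_additive, var_residual):
    """Narrow-sense heritability: h^2 = Va / (Va + Ve),
    assuming phenotypic variance is the sum of additive genetic
    and residual variance components (no other random effects)."""
    return var_additive / (var_additive + var_residual)
```

A trait with Va = 0.3 and Ve = 0.7 on this definition has h^2 = 0.30, in the moderate range the study reports for the visual scores.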
Flow visualization for investigating stator losses in a multistage axial compressor
NASA Astrophysics Data System (ADS)
Smith, Natalie R.; Key, Nicole L.
2015-05-01
The methodology and implementation of a powder-paint-based flow visualization technique, along with the illuminated flow physics, are presented in detail for application in a three-stage axial compressor. While flow visualization often accompanies detailed studies, the turbomachinery literature lacks a comprehensive study that both utilizes flow visualization to interrogate the flow field and explains the intricacies of execution. Lessons learned for obtaining high-quality images of surface flow patterns are discussed in this study. Fluorescent paint is used to provide clear, high-contrast pictures of the recirculation regions on shrouded vane rows. An edge-finding image processing procedure is implemented to provide a quantitative measure of vane-to-vane variability in flow separation, which is approximately 7% of the suction surface length for Stator 1. Results include images of vane suction side corner separations from all three stages at three loading conditions. Additionally, streakline patterns obtained experimentally are compared with those calculated from computational models. Flow physics associated with vane clocking and increased rotor tip clearance, and their implications for stator loss, are also investigated with this flow visualization technique. With increased rotor tip clearance, the vane surface flow patterns show a shift to larger separations and more radial flow at the tip. Finally, the effects of instrumentation on the flow field are highlighted.
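The edge-finding step that converts a surface-paint image into a separation extent can be sketched on a 1-D paint-coverage profile. The threshold and synthetic profile below are illustrative assumptions, not the authors' processing chain:

```python
import numpy as np

def separation_fraction(profile, threshold=0.5):
    """Fraction of the suction surface covered by the separated-flow
    paint signature.

    `profile` holds paint intensity sampled from leading to trailing
    edge; samples above `threshold` are treated as separated flow.
    """
    profile = np.asarray(profile, dtype=float)
    separated = profile > threshold
    return separated.sum() / profile.size

# Illustrative profile: paint accumulates over the aft 7% of the surface.
profile = np.concatenate([np.zeros(93), np.ones(7)])
```

On real images the same idea runs per pixel row, and the spread of the detected edge location across vanes gives the vane-to-vane variability measure.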
A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.
Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T
2015-09-01
We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses (the neurally controlled body patterns) that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.
Effects of body lean and visual information on the equilibrium maintenance during stance.
Duarte, Marcos; Zatsiorsky, Vladimir M
2002-09-01
Maintenance of equilibrium was tested in conditions in which humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of the visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed relative to when vision was present. Without vision, drifts in the COP data were observed, which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that visual information is used by the postural control system at both short and long time scales.
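The three descriptors named above (mean COP position, sway-ellipse area, and resultant median frequency) can each be computed from a COP trace. The sketch below is one common way to do so; the 95% chi-square constant and the synthetic test data are assumptions for illustration, not the study's exact procedure:

```python
import numpy as np

def cop_metrics(cop, fs):
    """Summarize a center-of-pressure trace.

    cop: (N, 2) array of COP positions; fs: sampling frequency in Hz.
    Returns the mean position, a 95% ellipse area from the covariance
    eigenvalues, and the median frequency of the resultant displacement.
    """
    cop = np.asarray(cop, dtype=float)
    mean_pos = cop.mean(axis=0)
    # 95% ellipse area: pi * chi2(0.95, 2 dof) * sqrt(product of eigenvalues)
    eigvals = np.linalg.eigvalsh(np.cov(cop.T))
    area = np.pi * 5.991 * np.sqrt(np.prod(eigvals))
    # Median frequency of the resultant displacement (DC bin excluded)
    resultant = np.linalg.norm(cop - mean_pos, axis=1)
    psd = np.abs(np.fft.rfft(resultant)) ** 2
    freqs = np.fft.rfftfreq(len(resultant), d=1.0 / fs)
    cum = np.cumsum(psd[1:])
    median_freq = freqs[1:][np.searchsorted(cum, cum[-1] / 2)]
    return mean_pos, area, median_freq
```

The median frequency here splits the spectral power of the resultant sway in half, which is the usual reading of "resultant median frequency" for COP data.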
Indocyanine green fluorescence imaging in hepatobiliary surgery.
Majlesara, Ali; Golriz, Mohammad; Hafezi, Mohammadreza; Saffari, Arash; Stenau, Esther; Maier-Hein, Lena; Müller-Stich, Beat P; Mehrabi, Arianeb
2017-03-01
Indocyanine green (ICG) is a fluorescent dye that has been widely used for fluorescence imaging during hepatobiliary surgery. ICG is injected intravenously, selectively taken up by the liver, and then secreted into the bile. The catabolism and fluorescence properties of ICG permit a wide range of visualization methods in hepatobiliary surgery. We have characterized the applications of ICG during hepatobiliary surgery as: 1) liver mapping, 2) cholangiography, 3) tumor visualization, and 4) partial liver graft evaluation. In this literature review, we summarize the current understanding of ICG use during hepatobiliary surgery. Intra-operative ICG fluorescence imaging is a safe, simple, and feasible method that improves the visualization of hepatobiliary anatomy and liver tumors. Intravenous administration of ICG is not toxic and avoids the drawbacks of conventional imaging. In addition, it reduces post-operative complications without any known side effects. ICG fluorescence imaging provides a safe and reliable contrast for extra-hepatic cholangiography when detecting intra-hepatic bile leakage following liver resection. In addition, liver tumors can be visualized and well-differentiated hepatocellular carcinoma tumors can be accurately identified. Moreover, vascular reconstruction and outflow can be evaluated following partial liver transplantation. However, since tissue penetration is limited to 5-10 mm, deeper tissue cannot be visualized using this method. Many instances of false positive or negative results have been reported, therefore further characterization is required.
Van, Khai; Hides, Julie A; Richardson, Carolyn A
2006-12-01
Study design: Randomized controlled trial. Objective: To determine if the provision of visual biofeedback using real-time ultrasound imaging enhances the ability to activate the multifidus muscle. Background: Increasingly, clinicians are using real-time ultrasound as a form of biofeedback when re-educating muscle activation. The effectiveness of this form of biofeedback for the multifidus muscle has not been reported. Methods: Healthy subjects were randomly divided into groups that received different forms of biofeedback. All subjects received clinical instruction on how to activate the multifidus muscle isometrically prior to testing and verbal feedback regarding the amount of multifidus contraction which occurred during 10 repetitions (acquisition phase). In addition, 1 group received visual biofeedback (watched the multifidus muscle contract) using real-time ultrasound imaging. All subjects were reassessed a week later (retention phase). Results: Subjects from both groups improved their voluntary contraction of the multifidus muscle in the acquisition phase (P<.001) and the ability to recruit the multifidus muscle differed between groups (P<.05), with subjects in the group that received visual ultrasound biofeedback achieving greater improvements. In addition, the group that received visual ultrasound biofeedback retained their improvement in performance from week 1 to week 2 (P>.90), whereas the performance of the other group decreased (P<.05). Conclusions: Real-time ultrasound imaging can be used to provide visual biofeedback and improve performance and retention in the ability to activate the multifidus muscle in healthy subjects.
Lenz, Robin; Enders, Kristina; Stedmon, Colin A; Mackenzie, David M A; Nielsen, Torkel Gissel
2015-11-15
Identification and characterisation of microplastic (MP) is a necessary step to evaluate their concentrations, chemical composition and interactions with biota. MP ≥10 μm in diameter filtered from below the sea surface in the European and subtropical North Atlantic were simultaneously identified by visual microscopy and Raman micro-spectroscopy. Visually identified particles below 100 μm had a significantly lower percentage confirmed by Raman than larger ones, indicating that visual identification alone is inappropriate for studies on small microplastics. Sixty-eight percent of visually counted MP (n=1279) were spectroscopically confirmed as plastic. The percentage varied with type, colour and size of the MP. Fibres had a higher success rate (75%) than particles (64%). We tested the applicability of Raman micro-spectroscopy for MP identification with respect to varying chemical composition (additives), degradation state and organic matter coating. Partially UV-degraded post-consumer plastics provided identifiable Raman spectra for the polymers most common among marine MP, i.e. polyethylene and polypropylene.
ICASE/LaRC Symposium on Visualizing Time-Varying Data
NASA Technical Reports Server (NTRS)
Banks, D. C. (Editor); Crockett, T. W. (Editor); Stacy, K. (Editor)
1996-01-01
Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers.
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
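The vergence demand such a stereoscopic display must reproduce follows from simple geometry: each eye rotates inward by atan((IPD/2)/d) for a target at distance d straight ahead. A sketch of that calculation, with an assumed interpupillary distance (the paper does not state one):

```python
import math

def vergence_angle_deg(target_distance_m, ipd_m=0.064):
    """Total ocular vergence angle (degrees) for a target straight ahead
    at `target_distance_m`, given interpupillary distance `ipd_m`.

    Each eye turns inward by atan((IPD/2) / d); the vergence angle is
    twice that. The 0.064 m IPD is a typical adult value, assumed here.
    """
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / target_distance_m))
```

Nearer virtual targets demand more vergence, which is why a display that renders per-eye images for arbitrary virtual distances can drive the appropriate ocular vergence.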
[Clinical Neuropsychology of Dementia with Lewy Bodies].
Nagahama, Yasuhiro
2016-02-01
Dementia with Lewy bodies (DLB) shows less severe memory impairment and more severe visuospatial disability than Alzheimer disease (AD). Although deficits in both consolidation and retrieval underlie the memory impairment, the retrieval deficit is predominant in DLB. Visuospatial dysfunction in DLB is related to impairments in both the ventral and dorsal streams of higher visual information processing, and lower-level visual processing in V1/V2 may also be impaired. Attention and executive functions are more widely disturbed in DLB than in AD. Imitation of finger gestures is impaired more frequently in DLB than in other mild dementias, and provides additional information for the diagnosis of mild dementia, especially DLB. Pareidolia, which lies between hallucination and visual misperception, is found frequently in DLB, but its mechanism is still under investigation.
Dineen, Brendan; Gilbert, Clare E; Rabiu, Mansur; Kyari, Fatima; Mahdi, Abdull M; Abubakar, Tafida; Ezelum, Christian C; Gabriel, Entekume; Elhassan, Elizabeth; Abiose, Adenike; Faal, Hannah; Jiya, Jonathan Y; Ozemela, Chinenyem P; Lee, Pak Sang; Gudlavalleti, Murthy VS
2008-01-01
Background Despite having the largest population in Africa, Nigeria has no accurate population-based data with which to plan and evaluate eye care services. A national survey was undertaken to estimate the prevalence and determine the major causes of blindness and low vision. This paper presents the detailed methodology used during the survey. Methods A nationally representative sample of persons aged 40 years and above was selected. Children aged 10–15 years and individuals aged <10 or 16–39 years with visual impairment were also included if they lived in households with an eligible adult. All participants had their height, weight, and blood pressure measured, followed by assessment of presenting visual acuity, refractokeratometry, A-scan ultrasonography, visual fields and best corrected visual acuity. The anterior and posterior segments of each eye were examined with a torch and direct ophthalmoscope. Participants with visual acuity of ≤6/12 in one or both eyes underwent detailed examination including applanation tonometry, dilated slit lamp biomicroscopy, lens grading and fundus photography. All those who had undergone cataract surgery were refracted and best corrected vision recorded. Causes of visual impairment, by eye and for the individual, were determined using a clinical algorithm recommended by the World Health Organization. In addition, 1 in 7 adults underwent the complete work-up described for those with vision ≤6/12, to construct a normative database for Nigerian eyes. Discussion The field work for the study was completed in 30 months over the period 2005–2007 and covered 305 clusters across the entire country. Analysis of the data is currently underway. Conclusion The methodology used was robust and adequate to provide estimates of the prevalence and causes of blindness in Nigeria.
The survey would also provide information on barriers to accessing services, quality of life of visually impaired individuals and also provide normative data for Nigerian eyes. PMID:18808712
Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf
2015-01-01
To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software suited to the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of context (e.g., topography) and other static data, support for multiple presentation devices used in modern science (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine.
Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.
Loss of Neurofilament Labeling in the Primary Visual Cortex of Monocularly Deprived Monkeys
Duffy, Kevin R.; Livingstone, Margaret S.
2009-01-01
Visual experience during early life is important for the development of neural organizations that support visual function. Closing one eye (monocular deprivation) during this sensitive period can cause a reorganization of neural connections within the visual system that leaves the deprived eye functionally disconnected. We have assessed the pattern of neurofilament labeling in monocularly deprived macaque monkeys to examine the possibility that a cytoskeleton change contributes to deprivation-induced reorganization of neural connections within the primary visual cortex (V-1). Monocular deprivation for three months starting around the time of birth caused a significant loss of neurofilament labeling within deprived-eye ocular dominance columns. Three months of monocular deprivation initiated in adulthood did not produce a loss of neurofilament labeling. The evidence that neurofilament loss was found only when deprivation occurred during the sensitive period supports the notion that the loss permits restructuring of deprived-eye neural connections within the visual system. These results provide evidence that, in addition to reorganization of LGN inputs, the intrinsic circuitry of V-1 neurons is altered when monocular deprivation occurs early in development. PMID:15563721
Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds
NASA Technical Reports Server (NTRS)
Day, B.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R.; Malhotra, S.; Sadaqathullah, S.; Schmidt, G.;
2015-01-01
NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. This presentation will provide an overview of LMMP, Vesta Trek, and Mars Trek, demonstrate their uses and capabilities, highlight new features, and preview coming enhancements.
NASA Astrophysics Data System (ADS)
Schiltz, Holly Kristine
Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: through verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry and working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations.
In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect that instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments about the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and to employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students, while also supporting them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation. Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen.
This work shows how deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems, and offer the instructor unique insight into students' mentally constructed strategies.
Rougier, Patrice R; Boudrahem, Samir
2017-09-01
The technique of additional visual feedback has been shown to significantly decrease the center of pressure (CP) displacements of a standing subject. Body-weight asymmetry is known to increase postural instability due to difficulties in coordinating the reaction forces exerted under each foot and is often a cardinal feature of various neurological and traumatic diseases. To examine the possible interactions between additional visual feedback and body-weight asymmetry effects, healthy adults were recruited in a protocol with and without additional visual feedback, with different levels of body-weight asymmetry. CP displacements under each foot were recorded and used to compute the resultant CP displacements (CPRes) and to estimate the vertically projected center of gravity (CGv) and CPRes-CGv displacements. Overall, six conditions were randomly proposed combining two factors: asymmetry, with three body-weight percentage distributions (50/50, 35/65 and 20/80; left/right leg), and feedback (with or without additional visual feedback). The additional visual feedback technique principally reduces CGv displacements, whereas asymmetry increases CPRes-CGv displacements along the mediolateral axis. Some effects on plantar CP displacements were also observed, but only under the unloaded foot. Interestingly, no interaction between additional visual feedback and body-weight asymmetry was reported. These results suggest that the various postural effects that ensue from manipulating additional visual feedback parameters, shown previously in healthy subjects in various studies, could apply independently of the level of asymmetry. Visual feedback effects could thus be observed in patients presenting weight-bearing asymmetries. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
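The resultant centre of pressure under both feet is conventionally obtained by weighting each foot's CP by the vertical load that foot bears. A minimal sketch of that standard biomechanics relation; the study's exact computation is not given in the abstract, so this is an illustration under that assumption:

```python
def resultant_cp(cp_left, cp_right, load_left, load_right):
    """Resultant centre-of-pressure position computed from the CP
    recorded under each foot, each weighted by the vertical load
    borne by that foot (a standard biomechanics relation; the
    paper's exact computation may differ).
    """
    total = load_left + load_right
    return (cp_left * load_left + cp_right * load_right) / total

# With a 20/80 left/right body-weight distribution, the resultant
# CP lies much closer to the right foot's CP:
cp = resultant_cp(cp_left=-0.10, cp_right=0.10, load_left=20, load_right=80)
```

This weighting is why coordinating the reaction forces under each foot becomes harder as the load distribution grows more asymmetric.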
The Anatomical and Functional Organization of the Human Visual Pulvinar
Pinsk, Mark A.; Kastner, Sabine
2015-01-01
The pulvinar is the largest nucleus in the primate thalamus and contains extensive, reciprocal connections with visual cortex. Although the anatomical and functional organization of the pulvinar has been extensively studied in Old and New World monkeys, little is known about the organization of the human pulvinar. Using high-resolution functional magnetic resonance imaging at 3 T, we identified two visual field maps within the ventral pulvinar, referred to as vPul1 and vPul2. Both maps contain an inversion of contralateral visual space with the upper visual field represented ventrally and the lower visual field represented dorsally. vPul1 and vPul2 border each other at the vertical meridian and share a representation of foveal space with iso-eccentricity lines extending across areal borders. Additional, coarse representations of contralateral visual space were identified within ventral medial and dorsal lateral portions of the pulvinar. Connectivity analyses on functional and diffusion imaging data revealed a strong distinction in thalamocortical connectivity between the dorsal and ventral pulvinar. The two maps in the ventral pulvinar were most strongly connected with early and extrastriate visual areas. Given the shared eccentricity representation and similarity in cortical connectivity, we propose that these two maps form a distinct visual field map cluster and perform related functions. The dorsal pulvinar was most strongly connected with parietal and frontal areas. The functional and anatomical organization observed within the human pulvinar was similar to the organization of the pulvinar in other primate species. SIGNIFICANCE STATEMENT The anatomical organization and basic response properties of the visual pulvinar have been extensively studied in nonhuman primates. Yet, relatively little is known about the functional and anatomical organization of the human pulvinar.
Using neuroimaging, we found multiple representations of visual space within the ventral human pulvinar and extensive topographically organized connectivity with visual cortex. This organization is similar to other nonhuman primates and provides additional support that the general organization of the pulvinar is consistent across the primate phylogenetic tree. These results suggest that the human pulvinar, like other primates, is well positioned to regulate corticocortical communication. PMID:26156987
Electron microscopy approach for the visualization of the epithelial and endothelial glycocalyx.
Chevalier, L; Selim, J; Genty, D; Baste, J M; Piton, N; Boukhalfa, I; Hamzaoui, M; Pareige, P; Richard, V
2017-06-01
This study presents a methodological approach for the visualization of the glycocalyx by electron microscopy. The glycocalyx is a three-dimensional network, mainly composed of glycolipids, glycoproteins and proteoglycans, associated with the plasma membrane. Over the past decade, the epithelial and endothelial glycocalyx has been shown to play an important role in physiology and pathology, increasing research interest, especially with regard to vascular function. Visualization of the glycocalyx therefore requires reliable techniques, and its preservation remains challenging due to its fragile and dynamic organization, which is highly sensitive to the various processing steps of electron microscopy sample preparation. In this study, chemical fixation was performed by perfusion as a good alternative to conventional fixation. Adding lanthanum nitrate to the fixative enhances staining of the glycocalyx in transmission electron microscopy bright field and improves its visualization when detecting elastically scattered electrons, thus providing chemical contrast. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Honeycomb: Visual Analysis of Large Scale Social Networks
NASA Astrophysics Data System (ADS)
van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.
The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large-scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns, and discuss lessons learned during design and implementation.
Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping
2018-05-22
Compared with the numerous software tools developed for the identification and quantification of -omics data, there remains a lack of suitable tools for downstream analysis and data visualization. To help researchers better understand the biological meaning in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various analysis methods such as normalization, missing-value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly used data visualization methods, including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. Contact: 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.
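The volcano plot mentioned above places each protein at (log2 fold change, -log10 p-value). A toy sketch of those coordinates with invented abundances and p-values; this is not PANDA-view code, and the p-values are assumed to come from a prior statistical test:

```python
import math

def volcano_points(abundance_a, abundance_b, p_values):
    """Volcano-plot coordinates per protein: x is the log2 fold
    change of condition B over condition A, y is -log10 of the
    test's p-value. Illustrative sketch only.
    """
    points = []
    for a, b, p in zip(abundance_a, abundance_b, p_values):
        points.append((math.log2(b / a), -math.log10(p)))
    return points

# Two hypothetical proteins: one 4-fold up-regulated with p = 0.001,
# one unchanged with p = 0.8.
pts = volcano_points([100.0, 50.0], [400.0, 50.0], [0.001, 0.8])
```

Points in the upper left and upper right of such a plot (large fold change, small p-value) are the candidates a researcher would inspect first.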
Chylothorax diagnosis: can the clinical chemistry laboratory do more?
Gibbons, Stephen M; Ahmed, Farhan
2015-01-01
Chylothorax is a rare anatomical disruption of the thoracic duct associated with a significant degree of morbidity and mortality. Diagnosis usually relies upon lipid analysis and visual inspection of the pleural fluid. However, this may be subject to incorrect interpretation. The aim of this study was to compare pleural fluid lipid analysis and visual inspection against lipoprotein electrophoresis. Nine pleural effusion samples suspected of being chylothorax were analysed. A combination of fluid lipid analysis and visual inspection was compared with lipoprotein electrophoresis for the detection of chylothorax. There was 89% concordance between the two methods. Using lipoprotein electrophoresis as the gold standard, the calculated sensitivity, specificity, positive predictive value and negative predictive value for lipid analysis/visual inspection were 83%, 100%, 100% and 75%, respectively. Examination of pleural effusion samples by lipoprotein electrophoresis may provide important additional information in the diagnosis of chylothorax. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Immunological multimetal deposition for rapid visualization of sweat fingerprints.
He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin
2014-11-10
A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, by combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and can provide additional information beyond simple identification. Moreover, iMMD is facile and does not require expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
CircosVCF: circos visualization of whole-genome sequence variations stored in VCF files.
Drori, E; Levy, D; Smirin-Yosef, P; Rahimi, O; Salmon-Divon, M
2017-05-01
Visualization of whole-genome variation in a meaningful manner assists researchers in gaining new insights into the underlying data, especially in the context of whole-genome comparisons. CircosVCF is a web-based visualization tool for genome-wide variant data described in VCF files, using circos plots. The user-friendly interface of CircosVCF supports an interactive design of the circles in the plot and the integration of additional information such as experimental data or annotations. The provided visualization capabilities give a broad overview of the genomic relationships between genomes and allow identification of specific, meaningful SNP regions. CircosVCF was implemented in JavaScript and is available at http://www.ariel.ac.il/research/fbl/software. Contact: malisa@ariel.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
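A VCF data record is a tab-separated line whose first columns are fixed (CHROM, POS, ID, REF, ALT, ...) and whose per-sample columns begin at index 9. A minimal parsing sketch of that layout; this is not CircosVCF's implementation, and the example record is invented:

```python
def parse_vcf_line(line):
    """Split one VCF 4.x data record into the fields a visualization
    tool typically needs: chromosome, position, alleles, and the
    per-sample genotypes. Minimal sketch, no header or INFO handling.
    """
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt = fields[0], int(fields[1]), fields[2], fields[3], fields[4]
    # GT is conventionally the first sub-field of each sample column.
    genotypes = [sample.split(":")[0] for sample in fields[9:]]
    return {"chrom": chrom, "pos": pos, "id": vid, "ref": ref,
            "alt": alt.split(","), "genotypes": genotypes}

# A hypothetical heterozygous/homozygous variant across two samples:
record = parse_vcf_line("chr1\t12345\t.\tA\tG\t50\tPASS\t.\tGT:DP\t0/1:30\t1/1:25")
```

Records parsed this way can be binned per chromosome to drive the positional tracks of a circos-style plot.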
Garcia-Retamero, Rocio; Hoffrage, Ulrich
2013-04-01
Doctors and patients have difficulty inferring the predictive value of a medical test from information about the prevalence of a disease and the sensitivity and false-positive rate of the test. Previous research has established that communicating such information in a format the human mind is adapted to, namely natural frequencies, rather than probabilities boosts the accuracy of diagnostic inferences. In this study, we investigated to what extent these inferences can be improved, beyond the effect of natural frequencies, by providing visual aids. Participants were 81 doctors and 81 patients who made diagnostic inferences about three medical tests on the basis of information about the prevalence of a disease and the sensitivity and false-positive rate of the tests. Half of the participants received the information as natural frequencies, while the other half received it as probabilities. Half of the participants received only numerical information, while the other half additionally received a visual aid representing the numerical information. In addition, participants completed a numeracy scale. Our study yielded three important findings: (1) doctors and patients made more accurate inferences when information was communicated as natural frequencies rather than probabilities; (2) visual aids boosted accuracy even when the information was provided in natural frequencies; and (3) doctors were more accurate in their diagnostic inferences than patients, though the differences in accuracy disappeared when differences in numerical skills were controlled for. Our findings have important implications for medical practice as they suggest suitable ways to communicate quantitative medical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
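The diagnostic inference in question is Bayes' rule for the positive predictive value, and the natural-frequency format re-expresses the same computation as counts of people. A sketch using the classic illustrative numbers (prevalence 1%, sensitivity 80%, false-positive rate 9.6%), which are not the study's actual test items:

```python
def ppv_from_probabilities(prevalence, sensitivity, false_positive_rate):
    """Positive predictive value via Bayes' rule in probability format."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

def ppv_from_natural_frequencies(population, prevalence, sensitivity,
                                 false_positive_rate):
    """The same inference phrased as natural frequencies: out of
    `population` people, how many test positive, and how many of
    those actually have the disease?
    """
    ill = population * prevalence
    true_pos = ill * sensitivity
    false_pos = (population - ill) * false_positive_rate
    return true_pos, false_pos, true_pos / (true_pos + false_pos)

p = ppv_from_probabilities(0.01, 0.8, 0.096)
tp, fp, ppv = ppv_from_natural_frequencies(1000, 0.01, 0.8, 0.096)
```

Both routes give the same answer, but the frequency version ("8 of roughly 103 positives are ill") makes the surprisingly low predictive value easy to see, which is the effect the study builds on.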
Innovative Visualization Techniques applied to a Flood Scenario
NASA Astrophysics Data System (ADS)
Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael
2013-04-01
The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that helps the user collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. The techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not yet in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story highlight data views of particular interest and subsequently share them or guide others within the data analysis process. It allows a person to select the relevant attributes for a snapshot, such as highlighted regions for comparisons, the time step, class values for the colour legend, etc., and capture the current application state, which can then be shared as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it.
The second technique, web-based linked views, uses multiple windows that respond interactively to user selections, so that when an object is selected or changed in one window, it is automatically updated in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing users to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization, which incorporates the ideas of storytelling, web-based linked views, and other visualization techniques for a 4-hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
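The snapshot idea, capturing the current application state in a hyperlink that recreates the view, can be sketched generically as follows. This is not the GAV framework's actual format; the URL and the state fields are invented for illustration:

```python
import base64
import json

def make_snapshot_link(base_url, state):
    """Encode a visualization-state dict into a shareable hyperlink
    fragment, in the spirit of the storytelling 'snapshot' mechanism
    (a generic sketch, not GAV's real serialization)."""
    payload = base64.urlsafe_b64encode(
        json.dumps(state, sort_keys=True).encode()).decode()
    return f"{base_url}#snapshot={payload}"

def load_snapshot_link(url):
    """Recover the state dict from a snapshot hyperlink."""
    payload = url.split("#snapshot=", 1)[1]
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical state for the flood scenario: a time step, selected
# buildings, and the active map layer.
state = {"time_step": 12, "selected_buildings": [101, 205], "layer": "flood_depth"}
link = make_snapshot_link("https://example.org/flood", state)
restored = load_snapshot_link(link)
```

Because the state round-trips losslessly, a recipient opening the link sees the same highlighted regions, time step and colour legend as the story's author.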
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
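The view-matching strategy analysed above can be sketched in a few lines: the agent stores a low-resolution panoramic view at the goal and later recovers a heading by rotating its current view and picking the shift that minimizes the pixel-wise difference (a rotational image difference function). The array sizes and the synthetic scene below are illustrative assumptions.

```python
# Minimal view-matching sketch: find the heading (column shift) that best
# aligns the current low-resolution panorama with the stored goal view.
import numpy as np

def best_heading(stored: np.ndarray, current: np.ndarray) -> int:
    """Return the column shift minimizing the RMS image difference."""
    width = stored.shape[1]
    errors = [np.sqrt(np.mean((np.roll(current, -s, axis=1) - stored) ** 2))
              for s in range(width)]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
panorama = rng.random((4, 36))           # low-resolution panoramic view, 4 x 36 px
rotated = np.roll(panorama, 10, axis=1)  # the same scene viewed 10 px off-heading
assert best_heading(panorama, rotated) == 10  # the rotation is recovered
```

The paper's point about resolution maps directly onto this sketch: coarser rows and columns smooth the error landscape, trading matching specificity for generalisation across nearby positions.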
Toward Model Building for Visual Aesthetic Perception
Lughofer, Edwin; Zeng, Xianyi
2017-01-01
Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194
Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...
2016-03-17
Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been duplicated in this field, as many research groups have developed their own specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to the diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A
2017-06-01
The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing a marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted with heavier pack loads. Additional armour load in the vest had no consequences for time to discriminate, coordination or head dynamics. This suggests that the addition of head-borne load should be carefully considered when integrating new technology, and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continues to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Abstract Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
NASA Astrophysics Data System (ADS)
Wilson, J. Adam; Walton, Léo M.; Tyler, Mitch; Williams, Justin
2012-08-01
This article describes a new method of providing feedback during a brain-computer interface movement task using a non-invasive, high-resolution electrotactile vision substitution system. We compared the accuracy and movement times during a center-out cursor movement task, and found that the task performance with tactile feedback was comparable to visual feedback for 11 participants. These subjects were able to modulate the chosen BCI EEG features during both feedback modalities, indicating that the type of feedback chosen does not matter provided that the task information is clearly conveyed through the chosen medium. In addition, we tested a blind subject with the tactile feedback system, and found that the training time, accuracy, and movement times were indistinguishable from results obtained from subjects using visual feedback. We believe that BCI systems with alternative feedback pathways should be explored, allowing individuals with severe motor disabilities and accompanying reduced visual and sensory capabilities to effectively use a BCI.
ERIC Educational Resources Information Center
Lin, Huifen
2012-01-01
For the past few decades, instructional materials enriched with multimedia elements have enjoyed increasing popularity. Multimedia-based instruction incorporating stimulating visuals, authentic audios, and interactive animated graphs of different kinds all provide additional and valuable opportunities for students to learn beyond what conventional…
Biological Implications of Artificial Illumination.
ERIC Educational Resources Information Center
Wurtman, Richard J.
1968-01-01
Environmental lighting exerts profound biologic effects on humans and other mammals, in addition to providing the visual stimulus. Light acts on the skin to stimulate the synthesis of Vitamin D. It also acts, through the eyes, to control several glands and many metabolic processes. Light, or its absence, "induces" certain biologic functions. Light…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choo, Jaegul; Kim, Hannah; Clarkson, Edward
2018-01-31
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
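The "retrieve, group into topics, place in a 2D space" pipeline described above can be illustrated on a toy bag-of-words corpus. The sketch below uses a plain term-document matrix and a truncated SVD for the 2D layout; VisIRR's actual topic modeling and layout algorithms are more sophisticated, so this is only a schematic of the idea.

```python
# Toy document-layout pipeline: bag-of-words counts -> SVD -> 2D coordinates
# suitable for a scatter-plot view. Corpus and vocabulary are illustrative.
import numpy as np

docs = ["visual analytics of documents",
        "document retrieval and recommendation",
        "neural networks for vision",
        "deep neural vision models"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# Project the term-document matrix to 2D for the scatter-plot layout.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * S[:2]            # one (x, y) point per document
assert coords.shape == (4, 2)
```

Documents sharing vocabulary land near each other in `coords`, which is the property a topic scatter view relies on; recommended documents can be projected into the same space so users see retrieved and recommended items together.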
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
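The force-directed, multi-criteria idea can be sketched with two toy criteria: candidate camera path points are iteratively relaxed under a smoothness force (attraction to neighbour midpoints) and a viewing-distance force (pull toward a preferred radius from the volume centre). The weights, the criteria, and the function name are illustrative assumptions; the paper's solver combines more criteria than these.

```python
# Force-directed relaxation of a camera path under two weighted criteria:
# smoothness and preferred viewing distance. A schematic sketch only.
import numpy as np

def relax_path(path, center, view_dist, w_smooth=0.5, w_dist=0.3, iters=200):
    path = path.astype(float).copy()
    for _ in range(iters):
        # Smoothness force: pull interior points toward neighbour midpoints.
        mid = 0.5 * (path[:-2] + path[2:])
        path[1:-1] += w_smooth * (mid - path[1:-1])
        # Distance force: move every point toward the preferred viewing radius.
        offset = path - center
        radius = np.linalg.norm(offset, axis=1, keepdims=True)
        path += w_dist * (view_dist - radius) * (offset / radius)
    return path

start = np.array([[3.0, 0.0, 0.0], [2.5, 1.5, 0.0], [0.0, 3.0, 0.0]])
out = relax_path(start, center=np.zeros(3), view_dist=2.0)
```

After relaxation the endpoints settle on the preferred viewing radius while the interior point balances smoothness against distance, which is exactly the kind of trade-off the weights expose: changing `w_smooth` or `w_dist` lets a user see how each criterion shapes the generated path.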
Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet
Rolls, Edmund T.
2012-01-01
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777
Nawroth, Christian; von Borell, Eberhard
2015-05-01
Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion against losses compared to non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder, the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in the visual or the auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.
NASA Astrophysics Data System (ADS)
Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.
1999-06-01
Many current corneal topography instruments (called videokeratographs) provide an 'acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting acuity at low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation, which displays the coherence area of the wavefront, has considerable advantages and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.
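The coherence-area idea above can be sketched numerically: given a map of optical path lengths over the cornea, the coherence area of each point P is the total area of points whose OPL lies within a tolerance of OPL(P). The OPL map below is a toy surface with a flat central zone and strongly varying periphery, standing in for clinical topography data; the tolerance value and grid are illustrative assumptions.

```python
# Toy coherence-area computation over a synthetic OPL map. A flat central
# zone (a well-focused region) yields a large coherence area; the strongly
# varying periphery yields small ones.
import numpy as np

def coherence_area(opl: np.ndarray, tol: float, pixel_area: float = 1.0) -> np.ndarray:
    """For every point P, the total area of points with |OPL - OPL(P)| <= tol."""
    flat = opl.ravel()
    # Pairwise OPL differences; acceptable for the small grids used here.
    within = np.abs(flat[:, None] - flat[None, :]) <= tol
    return (within.sum(axis=1) * pixel_area).reshape(opl.shape)

y, x = np.mgrid[-5:6, -5:6]
zone = x**2 + y**2 <= 9                       # flat, well-focused central zone
opl = np.where(zone, 0.0, 0.1 + 0.05 * (x**2 + y**2))  # aberrated periphery
areas = coherence_area(opl, tol=0.05)
assert areas[5, 5] > areas[0, 0]   # the flat apex coheres over a larger area
```

This captures the abstract's key intuition: a cornea need not be globally smooth to give good acuity, it only needs some region whose OPLs agree to within the tolerance, and that region shows up directly as a large coherence area.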
Stöckel, Tino; Fries, Udo
2013-01-01
We examined the influence of visual context information on skilled motor behaviour and motor adaptation in basketball. The rules of basketball in Europe have recently changed, such that the distance for three-point shots increased from 6.25 m to 6.75 m. As such, we tested the extent to which basketball experts can adapt to the longer distance when a) only the unfamiliar, new three-point line was provided as floor markings (NL group), or b) the familiar, old three-point line was provided in addition to the new floor markings (OL group). In the present study 20 expert basketball players performed 40 three-point shots from 6.25 m and 40 shots from 6.75 m. We assessed the percentage of hits and analysed the landing position of the ball. Results showed better adaptation of throwing performance to the longer distance when the old three-point line was provided as a visual landmark, compared to when only the new three-point line was provided. We hypothesise that the old three-point line delivered relevant information needed to successfully adapt to the greater distance in the OL group, whereas its absence disturbed performance and the ability to adapt in the NL group. The importance of visual landmarks for motor adaptation in basketball throwing is discussed relative to the influence of other information sources (i.e. angle of elevation relative to the basket) and sport practice.
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
Chen, Chih-Yang; Tian, Xiaoguang; Idrees, Saad; Münch, Thomas A.
2017-01-01
Microsaccades occur during gaze fixation to correct for miniscule foveal motor errors. The mechanisms governing such fine oculomotor control are still not fully understood. In this study, we explored microsaccade control by analyzing the impacts of transient visual stimuli on these movements’ kinematics. We found that such kinematics can be altered in systematic ways depending on the timing and spatial geometry of visual transients relative to the movement goals. In two male rhesus macaques, we presented peripheral or foveal visual transients during an otherwise stable period of fixation. Such transients resulted in well-known reductions in microsaccade frequency, and our goal was to investigate whether microsaccade kinematics would additionally be altered. We found that both microsaccade timing and amplitude were modulated by the visual transients, and in predictable manners by these transients’ timing and geometry. Interestingly, modulations in the peak velocity of the same movements were not proportional to the observed amplitude modulations, suggesting a violation of the well-known “main sequence” relationship between microsaccade amplitude and peak velocity. We hypothesize that visual stimulation during movement preparation affects not only the saccadic “Go” system driving eye movements but also a “Pause” system inhibiting them. If the Pause system happens to be already turned off despite the new visual input, movement kinematics can be altered by the readout of additional visually evoked spikes in the Go system coding for the flash location. Our results demonstrate precise control over individual microscopic saccades and provide testable hypotheses for mechanisms of saccade control in general. NEW & NOTEWORTHY Microsaccadic eye movements play an important role in several aspects of visual perception and cognition. However, the mechanisms for microsaccade control are still not fully understood. 
We found that microsaccade kinematics can be altered in a systematic manner by visual transients, revealing a previously unappreciated and exquisite level of control by the oculomotor system of even the smallest saccades. Our results suggest precise temporal interaction between visual, motor, and inhibitory signals in microsaccade control. PMID:28202573
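The "main sequence" mentioned above is the tight, roughly linear relation between (micro)saccade amplitude and peak velocity. A simple way to frame the reported violation is to fit that relation on baseline movements and ask whether a flash-trial movement falls off the fitted line. The numbers below are synthetic illustrations, not the study's data, and the 60 deg/s-per-deg slope is an assumed round figure.

```python
# Fit a main-sequence line on synthetic baseline microsaccades, then test
# whether a movement with disproportionately low peak velocity violates it.
import numpy as np

rng = np.random.default_rng(2)
amp = rng.uniform(0.1, 1.0, 50)             # baseline amplitudes (deg)
vel = 60.0 * amp + rng.normal(0.0, 1.0, 50) # peak velocities (deg/s), sd = 1

slope, intercept = np.polyfit(amp, vel, 1)  # fitted main sequence
predicted = slope * 0.5 + intercept         # expected velocity at 0.5 deg

# A flash-modulated movement of the same amplitude but much lower peak
# velocity sits far off the fitted line (> 3 residual SDs).
observed = 20.0
assert abs(observed - predicted) > 3.0
```

In practice one would compare flash-trial residuals against the baseline residual distribution; the point of the sketch is only that amplitude-matched movements with disproportionate velocities are detectable as main-sequence violations.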
Stadler, Jennifer G; Donlon, Kipp; Siewert, Jordan D; Franken, Tessa; Lewis, Nathaniel E
2016-06-01
The digitization of a patient's health record has profoundly impacted medicine and healthcare. The compilation and accessibility of medical history has provided clinicians an unprecedented, holistic account of a patient's conditions, procedures, medications, family history, and social situation. In addition to the bedside benefits, this level of information has opened the door for population-level monitoring and research, the results of which can be used to guide initiatives that are aimed at improving quality of care. Cerner Corporation partners with health systems to help guide population management and quality improvement projects. With such an enormous and diverse client base, varying in geography, size, organizational structure, and analytic needs, discerning meaning in the data and how they fit with that particular hospital's goals is a slow, difficult task that requires clinical, statistical, and technical literacy. This article describes the development of dashboards for efficient data visualization at the healthcare facility level. Focusing on two areas with broad clinical importance, sepsis patient outcomes and 30-day hospital readmissions, dashboards were developed with the goal of aggregating data and providing meaningful summary statistics, highlighting critical performance metrics, and providing easily digestible visuals that can be understood by a wide range of personnel with varying levels of skill and areas of expertise. These internal-use dashboards have allowed associates in multiple roles to perform a quick and thorough assessment on a hospital of interest by providing the data to answer necessary questions and to identify important trends or opportunities. This automation of a previously manual process has greatly increased efficiency, saving hours of work time per hospital analyzed.
Additionally, the dashboards have standardized the analysis process, ensuring use of the same metrics and processes so that overall themes can be compared across hospitals and health systems.
Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph
2013-09-01
The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between-group differences in prediction accuracy, with higher skilled players performing better than lower skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Each of these factors interacted with expertise separately, but not in combination. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception.
Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk
2016-01-01
To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often used to examine these properties, but with the increasing number of available structures in databases, and of structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures is presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, in which residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity, and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question.
Color-encoded sequences generated by SequenceCEROSENE can help researchers quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), supporting the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users can process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method yields a statistically significant improvement in the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.
In our resulting retrieval system, we decide to use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
PRROC: computing and visualizing precision-recall and receiver operating characteristic curves in R.
Grau, Jan; Grosse, Ivo; Keilwagen, Jens
2015-08-01
Precision-recall (PR) and receiver operating characteristic (ROC) curves are valuable measures of classifier performance. Here, we present the R-package PRROC, which allows for computing and visualizing both PR and ROC curves. In contrast to available R-packages, PRROC allows for computing PR and ROC curves and areas under these curves for soft-labeled data using a continuous interpolation between the points of PR curves. In addition, PRROC provides a generic plot function for generating publication-quality graphics of PR and ROC curves. © The Author 2015. Published by Oxford University Press.
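PRROC itself is an R package; as a language-neutral illustration of what such a package computes, the sketch below derives ROC and PR curve points by sweeping the classification threshold, then integrates the ROC curve with the trapezoidal rule. Note this is a simpler scheme than PRROC's continuous PR interpolation, and the data are invented for the example.

```python
def curve_points(labels, scores):
    """Sweep the score threshold from high to low, tracking TP/FP counts.

    Returns (roc, pr): ROC points as (FPR, TPR) and PR points as
    (recall, precision). Assumes binary labels and untied scores.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    roc, pr = [(0.0, 0.0)], []
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        roc.append((fp / N, tp / P))
        pr.append((tp / P, tp / (tp + fp)))
    return roc, pr

def trapezoid_auc(points):
    """Area under a curve given as threshold-ordered (x, y) points."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Toy labels and classifier scores, invented for this sketch.
labels = [0, 0, 1, 1, 0, 1, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]
roc, pr = curve_points(labels, scores)
print("AUROC:", trapezoid_auc(roc))  # 0.875 for this data
```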
MR imaging of the fetal musculoskeletal system.
Nemec, Stefan Franz; Nemec, Ursula; Brugger, Peter C; Bettelheim, Dieter; Rotmensch, Siegfried; Graham, John M; Rimoin, David L; Prayer, Daniela
2012-03-01
Magnetic resonance imaging (MRI) appears to be increasingly used, in addition to standard ultrasonography, for the diagnosis of abnormalities in utero. Recent studies have drawn attention to the technical refinement of MRI to visualize the fetal bones and muscles. Beyond commonly used T2-weighted MRI, echoplanar, thick-slab T2-weighted and dynamic sequences, and three-dimensional MRI techniques are about to provide new imaging insights into the normal and the pathological musculoskeletal system of the fetus. This review emphasizes the potential significance of MRI in the visualization of the fetal musculoskeletal system. © 2012 John Wiley & Sons, Ltd.
Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness
Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.
2017-01-01
Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features.
These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
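The mediation analysis mentioned above can be illustrated with a single-mediator toy computation. The sketch below uses synthetic data (not the study's ratings) and ordinary least squares to split the total effect of a low-level feature X on preference Y into a direct part and a part carried by a high-level mediator M; all variable names and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=n)                                 # low-level feature
M = 0.8 * X + rng.normal(scale=0.5, size=n)            # high-level mediator
Y = 0.1 * X + 0.5 * M + rng.normal(scale=0.5, size=n)  # preference rating

def ols(y, *regressors):
    """Least-squares coefficients of y on an intercept plus the regressors."""
    A = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(A, y, rcond=None)[0]

total = ols(Y, X)[1]          # c: total effect of X on Y
a = ols(M, X)[1]              # a: effect of X on the mediator
_, direct, b = ols(Y, X, M)   # c': direct effect; b: mediator effect on Y
indirect = a * b              # a*b: the portion carried through M
print(f"proportion mediated: {indirect / total:.2f}")
```

For linear OLS with intercepts, the identity total = direct + indirect holds exactly, which is the decomposition a mediation analysis reports.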
Catch the A-Train from the NASA GIBS/Worldview Platform
NASA Astrophysics Data System (ADS)
Schmaltz, J. E.; Alarcon, C.; Baynes, K.; Boller, R. A.; Cechini, M. F.; De Cesare, C.; De Luca, A. P.; Gunnoe, T.; King, B. A.; King, J.; Pressley, N. N.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Wong, M. M.
2016-12-01
The satellites and instruments of the Afternoon Train are providing an unprecedented combination of nearly simultaneous measurements. One of the challenges for researchers and applications users is to sift through these combinations to find particular sets of data that correspond to their interests. Using visualization of the data is one way to explore these combinations. NASA's Worldview tool is designed to do just that - to interactively browse full-resolution satellite imagery. Worldview (https://worldview.earthdata.nasa.gov/) is web-based and developed using open libraries and standards (OpenLayers, JavaScript, CSS, HTML) for cross-platform compatibility. It addresses growing user demands for access to full-resolution imagery by providing a responsive, interactive interface with global coverage and no artificial boundaries. In addition to science data imagery, Worldview provides ancillary datasets such as coastlines and borders, socio-economic layers, and satellite orbit tracks. Worldview interacts with the Earthdata Search Client to provide download of the data files associated with the imagery being viewed. The imagery used by Worldview is provided by NASA's Global Imagery Browse Services (GIBS - https://earthdata.nasa.gov/gibs), which provide highly responsive, highly scalable imagery services. Requests are made via the OGC Web Map Tile Service (WMTS) standard. In addition to Worldview, other clients can be developed using a variety of web-based libraries, desktop and mobile app libraries, and GDAL script-based access. GIBS currently includes more than 106 science data sets from seven instruments aboard three of the A-Train satellites, and new data sets are being added as part of the President's Big Earth Data Initiative (BEDI). Efforts are underway to include new imagery types, such as vectors and curtains, into Worldview/GIBS, which will be used to visualize additional A-Train science parameters.
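A WMTS request like those a GIBS client issues can be sketched as a key-value GetTile URL. The parameter names follow the OGC WMTS standard; the endpoint path, layer name, and tile matrix set below are illustrative assumptions, not verified identifiers, so consult the GIBS documentation for the actual values.

```python
from urllib.parse import urlencode

# Illustrative endpoint; the real GIBS service paths may differ.
GIBS_ENDPOINT = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/wmts.cgi"

def wmts_gettile_url(layer, time, tilematrixset, z, row, col, fmt="image/jpeg"):
    """Build an OGC WMTS key-value-pair GetTile request URL."""
    params = {
        "SERVICE": "WMTS", "REQUEST": "GetTile", "VERSION": "1.0.0",
        "LAYER": layer, "STYLE": "default", "TIME": time,
        "TILEMATRIXSET": tilematrixset, "TILEMATRIX": z,
        "TILEROW": row, "TILECOL": col, "FORMAT": fmt,
    }
    return GIBS_ENDPOINT + "?" + urlencode(params)

# Hypothetical layer name and date, for illustration only.
url = wmts_gettile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                       "2016-12-01", "250m", 2, 1, 3)
print(url)
```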
Explanatory and illustrative visualization of special and general relativity.
Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael
2006-01-01
This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.
Perceptual learning in children with visual impairment improves near visual acuity.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N
2013-09-17
This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).
Ahn, Seong Joon; Ahn, Jeeyun; Woo, Se Joon; Park, Kyu Hyung
2014-01-01
To compare the postoperative photoreceptor status and visual outcome after epiretinal membrane removal with or without additional internal limiting membrane (ILM) peeling. Medical records of 40 eyes from 37 patients undergoing epiretinal membrane removal with residual ILM peeling (additional ILM peeling group) and 69 eyes from 65 patients undergoing epiretinal membrane removal without additional ILM peeling (no additional peeling group) were reviewed. The length of defects in cone outer segment tips, inner segment/outer segment junction, and external limiting membrane line were measured using spectral domain optical coherence tomography images of the fovea before and at 1, 3, 6, and 12 months after the surgery. Cone outer segment tips and inner segment/outer segment junction line defects were most severe at postoperative 1 month and gradually restored at 12 months postoperatively. The cone outer segment tips line defect in the additional ILM peeling group was significantly greater than that in the no additional peeling group at postoperative 1 month (P = 0.006), and best-corrected visual acuity was significantly worse in the former group at the same month (P = 0.001). There was no significant difference in the defect size and best-corrected visual acuity at subsequent visits and recurrence rates between the two groups. Patients who received epiretinal membrane surgery without additional ILM peeling showed better visual and anatomical outcome than those with additional ILM peeling at postoperative 1 month. However, surgical outcomes were comparable between the two groups, thereafter. In terms of visual outcome and photoreceptor integrity, additional ILM peeling may not be an essential procedure.
The visual communication of risk.
Lipkus, I M; Hollands, J G
1999-01-01
This paper 1) provides reasons why graphics should be effective aids to communicate risk; 2) reviews the use of visuals, especially graphical displays, to communicate risk; 3) discusses issues to consider when designing graphs to communicate risk; and 4) provides suggestions for future research. Key articles and materials were obtained from MEDLINE(R) and PsychInfo(R) databases, from reference article citations, and from discussion with experts in risk communication. Research has been devoted primarily to communicating risk magnitudes. Among the various graphical displays, the risk ladder appears to be a promising tool for communicating absolute and relative risks. Preliminary evidence suggests that people understand risk information presented in histograms and pie charts. Areas that need further attention include 1) applying theoretical models to the visual communication of risk, 2) testing which graphical displays can be applied best to different risk communication tasks (e.g., which graphs best convey absolute or relative risks), 3) communicating risk uncertainty, and 4) testing whether the lay public's perceptions and understanding of risk varies by graphical format and whether the addition of graphical displays improves comprehension substantially beyond numerical or narrative translations of risk and, if so, by how much. There is a need to ascertain the extent to which graphics and other visuals enhance the public's understanding of disease risk to facilitate decision-making and behavioral change processes. Nine suggestions are provided to help achieve these ends.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
Capturing planar shapes by approximating their outlines
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Riyazuddin, M.; Baig, M. H.
2006-05-01
A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform rational B-splines (NURBS) are used as the underlying approximating curve scheme, and a simulated annealing heuristic serves as the evolutionary method. In addition to independent studies of the optimization of the weight and knot parameters of the NURBS, a separate scheme has been developed for optimizing weights and knots simultaneously. The optimized NURBS models are fitted over the contour data of the planar shapes to produce the final output automatically. The results are visually pleasing with respect to a user-provided threshold. A web-based system has also been developed so that users worldwide can visualize the output over the Internet and freely set the algorithm's various input parameters.
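The simulated annealing heuristic at the core of this approach can be sketched generically. The loop below minimizes a toy one-dimensional cost; in the paper's setting the state would instead be the vector of NURBS weights and knots, and the cost the fitting error against the contour data. All parameter values here are illustrative, not the authors'.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995,
                        iters=2000, seed=0):
    """Minimal simulated annealing: always accept improvements, and accept
    worse candidates with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # perturb the current state
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest

# Toy cost with its minimum (value 1.0) at x = 3.
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2 + 1.0, x0=0.0)
print(round(best, 2), round(fbest, 2))
```

The early high-temperature phase allows uphill moves that escape local minima, which is why the technique suits the non-convex weight/knot optimization described in the abstract.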
Visualizing Spatially Varying Distribution Data
NASA Technical Reports Server (NTRS)
Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we are now facing the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn over each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to present 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.
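The parametric-statistics idea can be sketched with NumPy: collapse the per-location distributions into the box plot statistics, each of which then becomes a 2D field that can be rendered as an ordinary image rather than as a cluttered grid of box plots. The grid size and data below are arbitrary.

```python
import numpy as np

# Hypothetical data: a distribution of 100 samples at each cell of a 4x4 grid.
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 4, 100))

# Box plot statistics per spatial location, computed along the sample axis.
summary = {
    "min":    data.min(axis=-1),
    "q1":     np.percentile(data, 25, axis=-1),
    "median": np.median(data, axis=-1),
    "mean":   data.mean(axis=-1),
    "q3":     np.percentile(data, 75, axis=-1),
    "max":    data.max(axis=-1),
}

# Each statistic is itself a 2D field that can be shown as an image,
# avoiding one box plot glyph per grid cell.
print(summary["median"].shape)  # (4, 4)
```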
Flowfield visualization for SSME hot gas manifold
NASA Technical Reports Server (NTRS)
Roger, Robert P.
1988-01-01
The objective of this research, as defined by NASA-Marshall Space Flight Center, was two-fold: (1) to numerically simulate viscous subsonic flow in a proposed elliptical two-duct version of the fuel side Hot Gas Manifold (HGM) for the Space Shuttle Main Engine (SSME), and (2) to provide analytical support for SSME-related numerical computational experiments being performed by the Computational Fluid Dynamics staff in the Aerophysics Division of the Structures and Dynamics Laboratory at NASA-MSFC. The numerical HGM calculations were intended to complement water flow and air flow visualization experiments in two-duct geometries performed at NASA-MSFC and Rocketdyne. In addition, code modification and improvement efforts were to strengthen the CFD capabilities of NASA-MSFC for producing reliable predictions of flow environments within the SSME.
Eye-movements and Voice as Interface Modalities to Computer Systems
NASA Astrophysics Data System (ADS)
Farid, Mohsen M.; Murtagh, Fionn D.
2003-03-01
We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.
Visualizing Motion Patterns in Acupuncture Manipulation.
Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung
2016-07-16
Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system.
Distributed visualization framework architecture
NASA Astrophysics Data System (ADS)
Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger
2010-01-01
An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy to use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These light-weight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance for rendering). A middle-tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically.
Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new visualization renderings that are shader-based and greatly accelerates the development and debug cycle.
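The interface-and-registry idea described above (components implementing small interfaces, discovered through an AssetManager so that GUI components populate automatically) can be sketched as follows. Apart from the IMaterial and AssetManager names taken from the abstract, the class and method signatures here are invented for illustration.

```python
from abc import ABC, abstractmethod

class IMaterial(ABC):
    """One of the small interfaces that components implement."""
    @abstractmethod
    def shade(self) -> str: ...

class PhongMaterial(IMaterial):
    """A concrete component, discovered via the interface it supports."""
    def shade(self) -> str:
        return "phong"

class AssetManager:
    """Tracks registered components and answers interface queries, so
    GUI components that work with an interface can list all providers."""
    def __init__(self):
        self._assets = []

    def register(self, asset):
        self._assets.append(asset)

    def query(self, interface):
        return [a for a in self._assets if isinstance(a, interface)]

mgr = AssetManager()
mgr.register(PhongMaterial())
print([type(a).__name__ for a in mgr.query(IMaterial)])  # ['PhongMaterial']
```

Adding another IMaterial subclass and registering an instance makes it appear in the same query with no changes to the querying code, which is the extensibility property the abstract emphasizes.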
The four-meter confrontation visual field test.
Kodsi, S R; Younge, B R
1992-01-01
The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test. We recommend use of this confrontation visual field test, in addition to the standard 0.5-m confrontation visual field test, on appropriately selected patients to obtain the most information possible by confrontation visual field tests. PMID:1494829
Giraud, Stéphanie; Brock, Anke M; Macé, Marc J-M; Jouffrais, Christophe
2017-01-01
Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs) to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they take a long time to produce, are expensive, and are not versatile enough to provide rapid updating of the content. For instance, the same RLM can barely be used during different lessons. In addition, those maps do not provide any interactivity, which reduces students' autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs) which are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small-scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation including students with specific needs.
An Approach to Providing a User Interface for Military Computer-Aided-Instruction in 1980.
ERIC Educational Resources Information Center
Gallenson, Louis
A recent needs study determined that most of the terminal requirements for military computer assisted instruction (CAI) applications can be satisfied with mainstream commercial terminals. Additional development, however, is likely to be required to satisfy two of the capabilities (limited graphics and prerecorded visuals). The expected…
Hospital Information Systems for Clinical and Research Applications: A Survey of the Issues
1983-06-01
potentials for auditory and visual nervous system activity) is being used intensively in the field of neurophysiology (27, 108, 109). In addition, the high...user group: this provides a community of enlightened users who can share ideas and experiences. (NOTE: NCHSR support ended January 1, 1983.)
Effects of Numerical Surface Form in Arithmetic Word Problems
ERIC Educational Resources Information Center
Orrantia, Josetxu; Múñez, David; San Romualdo, Sara; Verschaffel, Lieven
2015-01-01
Adults' simple arithmetic performance is more efficient when operands are presented in Arabic digit (3 + 5) than in number word (three + five) formats. An explanation provided is that visual familiarity with digits is higher respect to number words. However, most studies have been limited to single-digit addition and multiplication problems. In…
Using the Fine Arts to Teach Early Childhood Essential Elements.
ERIC Educational Resources Information Center
Education Service Center Region 11, Ft. Worth, TX.
This extensive curriculum guide provides teachers of young children ages three to six with some specific lesson plans using the fine arts--music, drama, creative movement, and visual arts--to teach the "essential elements" in early childhood education. In addition, systematic, thorough evaluations of a variety of materials, kits, resource and…
Dorsal raphe nucleus projecting retinal ganglion cells: Why Y cells?
Pickard, Gary E.; So, Kwok-Fai; Pu, Mingliang
2015-01-01
Retinal ganglion Y (alpha) cells are found in retinas ranging from frogs to mice to primates. The highly conserved nature of the large, fast conducting retinal Y cell is a testament to its fundamental task, although precisely what this task is remained ill-defined. The recent discovery that Y-alpha retinal ganglion cells send axon collaterals to the serotonergic dorsal raphe nucleus (DRN) in addition to the lateral geniculate nucleus (LGN), medial interlaminar nucleus (MIN), pretectum and the superior colliculus (SC) has offered new insights into the important survival tasks performed by these cells with highly branched axons. We propose that in addition to its role in visual perception, the Y-alpha retinal ganglion cell provides concurrent signals via axon collaterals to the DRN, the major source of serotonergic afferents to the forebrain, to dramatically inhibit 5-HT activity during orientation or alerting/escape responses, which dis-facilitates ongoing tonic motor activity while dis-inhibiting sensory information processing throughout the visual system. The new data provide a fresh view of these evolutionarily old retinal ganglion cells. PMID:26363667
Video-mediated communication to support distant family connectedness.
Furukawa, Ryoko; Driessnack, Martha
2013-02-01
It can be difficult to maintain family connections with geographically distant members. However, advances in computer-human interaction (CHI) systems, including video-mediated communication (VMC) are emerging. While VMC does not completely substitute for physical face-to-face communication, it appears to provide a sense of virtual copresence through the addition of visual and contextual cues to verbal communication between family members. The purpose of this study was to explore current patterns of VMC use, experiences, and family functioning among self-identified VMC users separated geographically from their families. A total of 341 participants (ages 18 to above 70) completed an online survey and Family APGAR. Ninty-six percent of the participants reported that VMC was the most common communication method used and 60% used VMC at least once/week. The most common reason cited for using VMC over other methods of communication was the addition of visual cues. A significant difference between the Family APGAR scores and the number of positive comments about VMC experience was also found. This exploratory study provides insight into the acceptance of VMC and its usefulness in maintaining connections with distant family members.
LaZerte, Stefanie E; Reudink, Matthew W; Otter, Ken A; Kusack, Jackson; Bailey, Jacob M; Woolverton, Austin; Paetkau, Mark; de Jong, Adriaan; Hill, David J
2017-10-01
Radio frequency identification (RFID) provides a simple and inexpensive approach for examining the movements of tagged animals, which can provide information on species behavior and ecology, such as habitat/resource use and social interactions. In addition, tracking animal movements is appealing to naturalists, citizen scientists, and the general public and thus represents a tool for public engagement in science and science education. Although a useful tool, the large amount of data collected using RFID may quickly become overwhelming. Here, we present an R package (feedr) we have developed for loading, transforming, and visualizing time-stamped, georeferenced data, such as RFID data collected from static logger stations. Using our package, data can be transformed from raw RFID data to visits, presence (regular detections by a logger over time), movements between loggers, displacements, and activity patterns. In addition, we provide several conversion functions to allow users to format data for use in functions from other complementary R packages. Data can also be visualized through static or interactive maps or as animations over time. To increase accessibility, data can be transformed and visualized either through R directly, or through the companion site: http://animalnexus.ca, an online, user-friendly, R-based Shiny Web application. This system can be used by professional and citizen scientists alike to view and study animal movements. We have designed this package to be flexible and to be able to handle data collected from other stationary sources (e.g., hair traps, static very high frequency (VHF) telemetry loggers, observations of marked individuals in colonies or staging sites), and we hope this framework will become a meeting point for science, education, and community awareness of the movements of animals. We aim to inspire citizen engagement while simultaneously enabling robust scientific analysis.
WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets
NASA Astrophysics Data System (ADS)
Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.
2010-12-01
WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years .The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote, web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics to geophysical fluid dynamics . In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web and javascript-enabled cell phones. 
Features in the current version include the ability for users to (1) securely login (2) launch multiple visualizations (3) conduct collaborative visualization sessions (4) delegate control aspects of a visualization to others and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.
Immersion ultrasonography: simultaneous A-scan and B-scan.
Coleman, D J; Dallow, R L; Smith, M E
1979-01-01
In eyes with opaque media, ophthalmic ultrasound provides a unique source of information that can dramatically affect the course of patient management. In addition, when an ocular abnormality can be visualized, ultrasonography provides information that supplements and complements other diagnostic testing. It provides documentation and differentiation of abnormal states, such as vitreous hemorrhage and intraocular tumor, as well as differentiation of orbital tumors from inflammatory causes of exophthalmos. Additional capabilities of ultrasound are biometric determinations for calculation of intraocular lens implant powers and drug-effectiveness studies. Maximal information is derived from ultrasonography when A-scan and B-scan techniques are employed simultaneously. Flexibility of electronics, variable-frequency transducers, and the use of several different manual scanning patterns aid in detection and interpretation of results. The immersion system of ultrasonography provides these features optimally.
Urosevich, Thomas G; Boscarino, Joseph J; Hoffman, Stuart N; Kirchner, H Lester; Figley, Charles R; Adams, Richard E; Withey, Carrie A; Boscarino, Joseph A
2018-05-24
Traumatic brain injury (TBI) and post-traumatic stress disorder are considered the signature injuries of the Iraq and Afghanistan conflicts. With the extensive use of improvised explosive devices by the enemy, the concussive effects from blast have a greater potential to cause mild TBI (mTBI) in military Service Members. These mTBI can be associated with other physical and psychological health problems, including mTBI-induced visual processing and eye movement dysfunctions. Our study assessed if any visual dysfunctions existed in those surveyed in non-Veterans Administration (VA) facilities who had suffered mTBI (concussive effect), in addition to the presence of concussion-related co-morbidities. As part of a larger study involving veterans from different service eras, we surveyed 235 Veterans who had served during the Iraq and/or Afghanistan conflict era. Data for the study were collected using diagnostic telephone interviews of these veterans who were outpatients of the Geisinger Health System. We assess visual dysfunction in this sample and compare visual dysfunctions of those who had suffered a mTBI (concussive effect), as well as co-morbidities, with those in the cohort who had not suffered concussion effects. Of those veterans who experienced visual dysfunctions, our results reflected that the visual symptoms were significant for concussion with the subjects surveyed, even though all had experienced a mTBI event greater than five years ago. Although we did find an association with concussion and visual symptoms, the association for concussion was strongest with the finding of greater than or equal to three current TBI symptoms, therefore we found this to be the best predictor of previous concussion among the veterans. Veterans from the Iraq/Afghanistan era who had suffered concussive blast effects (mTBI) can present with covert visual dysfunction as well as additional physical and psychological health problems. 
The primary eye care providers, especially those in a non-military/VA facility, who encounter these veterans need to be aware of the predictors of mTBI, with the aim of uncovering visual dysfunctions and other associated co-morbidities.
Visual cues and listening effort: individual variability.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
2011-10-01
To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.
Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E
2016-01-01
Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias, that is typical for unimpaired adult readers. PMID:25360129
Interactive visualization of vegetation dynamics
Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James
2001-01-01
Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintaining upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravity acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.
Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni
2017-09-01
The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67), direct (1.62 ± 0.86), or both visual and direct force feedback (2.15 ± 1.08) resulted in lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < 0.05). To achieve better control of interaction forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue
2015-01-01
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
Visual quality analysis for images degraded by different types of noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.
2013-02-01
Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and deals with the different sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into a metric PSNRHVS- M. It is shown that some improvement of its performance can be provided. Then, visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal dependent, Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of TID2008 database. Several modern HVS-metrics are considered. It is shown that even the best metrics are unable to assess visual quality of distorted images adequately enough. The reasons for this deal with the observer's attention to certain objects in the test images, i.e., with semantic aspects of vision, which are worth taking into account in design of HVS-metrics.
Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation
Lusk, Laina G.; Mitchel, Aaron D.
2016-01-01
Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959
Electrophysiological spatiotemporal dynamics during implicit visual threat processing.
DeLaRosa, Bambi L; Spence, Jeffrey S; Shakal, Scott K M; Motes, Michael A; Calley, Clifford S; Calley, Virginia I; Hart, John; Kraut, Michael A
2014-11-01
Numerous studies have found evidence for corticolimbic theta band electroencephalographic (EEG) oscillations in the neural processing of visual stimuli perceived as threatening. However, varying temporal and topographical patterns have emerged, possibly due to varying arousal levels of the stimuli. In addition, recent studies suggest neural oscillations in delta, theta, alpha, and beta-band frequencies play a functional role in information processing in the brain. This study implemented a data-driven PCA based analysis investigating the spatiotemporal dynamics of electroencephalographic delta, theta, alpha, and beta-band frequencies during an implicit visual threat processing task. While controlling for the arousal dimension (the intensity of emotional activation), we found several spatial and temporal differences for threatening compared to nonthreatening visual images. We detected an early posterior increase in theta power followed by a later frontal increase in theta power, greatest for the threatening condition. There was also a consistent left lateralized beta desynchronization for the threatening condition. Our results provide support for a dynamic corticolimbic network, with theta and beta band activity indexing processes pivotal in visual threat processing. Published by Elsevier Inc.
Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets
Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc
2016-01-01
Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916
3D Orbit Visualization for Earth-Observing Missions
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.
2011-01-01
This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows for users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-average dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission s ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capability are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL s OnEarth Web service.
Evaluation of a novel multi-articulated endoscope: proof of concept through a virtual simulation.
Karvonen, Tuukka; Muranishi, Yusuke; Yamamoto, Goshiro; Kuroda, Tomohiro; Sato, Toshihiko
2017-07-01
In endoscopic surgery such as video-assisted thoracoscopic surgery and laparoscopic surgery, providing the surgeon a good view of the target is important. Rigid endoscope has for years been the go-to tool for this purpose, but it has certain limitations like the inability to work around obstacles. To improve on current tools, a novel multi-articulated endoscope (MAE) is currently under development. To investigate its feasibility and possible value, we performed a user test using virtual prototype of the MAE with the intent to show that it outperforms the conventional endoscope while bringing minimal additional burden to the operator. To evaluate the prototype, we built a virtual model of the MAE and a rigid oblique-viewing endoscope. Through a comparative user study we evaluate the ability of each device to visualize certain targets placed inside the virtual chest cavity by the angle between the visual axis of the scope and the normal of the plane of the target, while accounting for the usability of each endoscope by recording the time taken for each task. In addition, we collected a questionnaire from each participant to obtain feedback. The angles obtained using the MAE were smaller on average ([Formula: see text]), indicating that better visualization can be achieved through the proposed method. A nonsignificant difference in mean time taken for each task in favor of the rigid endoscope was also found ([Formula: see text]). We have demonstrated that better visualization for endoscopic surgery can be achieved through our novel MAE. The scope may bring about a paradigm shift in the field of minimally invasive surgery by providing more freedom in viewpoint selection, enabling surgeons to perform more elaborate procedures in minimally invasive settings.
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation
Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.
2016-01-01
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks, using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
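Rate-based Poisson spike generation, the first of the encoding techniques listed above, can be sketched as follows. This is a generic illustration of the technique, not the dataset's exact generator; the parameter names and rates are assumptions.

```python
import random

def poisson_spike_train(intensity, max_rate_hz=100.0, duration_s=1.0,
                        dt_s=0.001, rng=None):
    """Convert a normalized pixel intensity (0..1) into a Poisson spike train.

    In each time bin the probability of a spike is rate * dt, so brighter
    MNIST pixels fire proportionally more often.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    p_spike = intensity * max_rate_hz * dt_s
    n_bins = int(duration_s / dt_s)
    # Return the spike times (in seconds) of the bins that fired
    return [t * dt_s for t in range(n_bins) if rng.random() < p_spike]

bright = poisson_spike_train(0.9)  # bright pixel: high firing rate
dark = poisson_spike_train(0.1)   # dark pixel: low firing rate
```

A full encoder would apply this per pixel, yielding one spike train per input neuron.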
Multimodal fusion of polynomial classifiers for automatic person recognition
NASA Astrophysics Data System (ADS)
Broun, Charles C.; Zhang, Xiaozheng
2001-03-01
With the prevalence of the information age, privacy and personalization are forefront in today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process.
A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
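A polynomial classifier of the kind named above typically expands the input features into all monomials up to some order and then applies a learned linear decision rule. The sketch below shows that general pattern only; the expansion order, weights, and helper names are illustrative, not the authors' implementation.

```python
from itertools import combinations_with_replacement

def polynomial_expand(x, order=2):
    """Expand a feature vector into all monomials up to the given order.

    For x = [a, b] and order 2 this yields [1, a, b, a*a, a*b, b*b];
    a polynomial classifier applies learned weights to this vector.
    """
    terms = [1.0]
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(x)), k):
            prod = 1.0
            for i in idx:
                prod *= x[i]
            terms.append(prod)
    return terms

def score(x, weights, order=2):
    """Linear score on the polynomial-expanded features."""
    phi = polynomial_expand(x, order)
    return sum(w * f for w, f in zip(weights, phi))
```

Because the expansion is fixed, training reduces to fitting a linear model over the expanded features.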
Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D
2014-01-01
The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems. © 2013 American Association of Anatomists.
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal response. Furthermore, a (negative) correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.
Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L
2015-10-15
A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands with other genome annotation data. However, its features enable it to be more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.
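The JSON configuration mentioned above can be generated from any language. The snippet below only illustrates the general shape of such a configuration; every field name here is hypothetical, since the actual GenomeD3Plot schema is defined by the project's own documentation.

```python
import json

# Hypothetical configuration for a genome plot with two tracks; the real
# GenomeD3Plot schema may use different keys -- this shows the general idea
# of a declarative, standards-based JSON configuration.
config = {
    "genome_length": 4641652,  # e.g. an E. coli-sized chromosome
    "tracks": [
        {"name": "genes", "type": "track",
         "items": [{"start": 100, "end": 1200, "name": "geneA"}]},
        {"name": "gc_plot", "type": "plot",
         "points": [0.51, 0.48, 0.55]},
    ],
}

config_json = json.dumps(config, indent=2)  # serialized for the browser
loaded = json.loads(config_json)            # round-trips cleanly
```

A server-side script can emit such a file, which the JavaScript library then renders client-side.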
fMRI mapping of the visual system in the mouse brain with interleaved snapshot GE-EPI.
Niranjan, Arun; Christie, Isabel N; Solomon, Samuel G; Wells, Jack A; Lythgoe, Mark F
2016-10-01
The use of functional magnetic resonance imaging (fMRI) in mice is increasingly prevalent, providing a means to non-invasively characterise functional abnormalities associated with genetic models of human diseases. The predominant stimulus used in task-based fMRI in the mouse is electrical stimulation of the paw. Task-based fMRI in mice using visual stimuli remains underexplored, despite visual stimuli being common in human fMRI studies. In this study, we map the mouse brain visual system with BOLD measurements at 9.4 T using flashing light stimuli with medetomidine anaesthesia. BOLD responses were observed in the lateral geniculate nucleus, the superior colliculus and the primary visual area of the cortex, and were modulated by the flashing frequency, diffuse vs. focussed light, and stimulus context. Negative BOLD responses were measured in the visual cortex at a 10 Hz flashing frequency but turned positive below 5 Hz. In addition, the use of interleaved snapshot GE-EPI improved fMRI image quality without diminishing the temporal contrast-to-noise ratio. Taken together, this work demonstrates a novel methodological protocol in which the mouse brain visual system can be non-invasively investigated using BOLD fMRI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Task-technology fit of video telehealth for nurses in an outpatient clinic setting.
Cady, Rhonda G; Finkelstein, Stanley M
2014-07-01
Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.
Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV).
Bouman, Zita; Hendriks, Marc P H; Schmand, Ben A; Kessels, Roy P C; Aldenkamp, Albert P
2016-01-01
Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who were asked to simulate cognitive impairment as a result of a traumatic brain injury; the last group consisted of 50 healthy controls who were instructed to put forth full effort. Experimental malingerers performed significantly lower on all WMS-IV-NL tasks than did the patients and healthy controls. A binary logistic regression analysis was performed on the experimental malingerers and the patients. The first model contained the visual working memory subtests (Spatial Addition and Symbol Span) and the recognition tasks of the following subtests: Logical Memory, Verbal Paired Associates, Designs, and Visual Reproduction. The results showed an overall classification rate of 78.4%, and only Spatial Addition explained a significant amount of variation (p < .001). Subsequent logistic regression analysis and receiver operating characteristic (ROC) analysis supported the discriminatory power of the subtest Spatial Addition. A scaled score cutoff of <4 produced 93% specificity and 52% sensitivity for detection of suboptimal performance. The WMS-IV-NL Spatial Addition subtest may provide clinically useful information for the detection of suboptimal performance.
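The reported 93% specificity / 52% sensitivity at a scaled-score cutoff of <4 follow from the standard definitions of those measures. A minimal sketch with entirely synthetic scores (the real study's data are not reproduced here):

```python
def sensitivity_specificity(simulator_scores, genuine_scores, cutoff):
    """Classify a score below `cutoff` as suboptimal performance.

    Sensitivity: fraction of simulators correctly flagged.
    Specificity: fraction of genuine performers correctly not flagged.
    """
    true_pos = sum(s < cutoff for s in simulator_scores)
    true_neg = sum(s >= cutoff for s in genuine_scores)
    return (true_pos / len(simulator_scores),
            true_neg / len(genuine_scores))

# Synthetic Spatial Addition scaled scores, purely illustrative
simulators = [2, 3, 3, 5, 1, 6, 2, 7, 3, 4]
patients = [8, 9, 5, 7, 4, 10, 6, 3, 9, 8]
sens, spec = sensitivity_specificity(simulators, patients, cutoff=4)
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) yields the ROC curve the study analyzed.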
Creativity, visualization abilities, and visual cognitive style.
Kozhevnikov, Maria; Kozhevnikov, Michael; Yu, Chen Jiao; Blazhenkova, Olesya
2013-06-01
Despite the recent evidence for a multi-component nature of both visual imagery and creativity, there have been no systematic studies on how the different dimensions of creativity and imagery might interrelate. The main goal of this study was to investigate the relationship between different dimensions of creativity (artistic and scientific) and dimensions of visualization abilities and styles (object and spatial). In addition, we compared the contributions of object and spatial visualization abilities versus corresponding styles to scientific and artistic dimensions of creativity. Twenty-four undergraduate students (12 females) were recruited for the first study, and 75 additional participants (36 females) were recruited for an additional experiment. Participants were administered a number of object and spatial visualization abilities and style assessments as well as a number of artistic and scientific creativity tests. The results show that object visualization relates to artistic creativity and spatial visualization relates to scientific creativity, while both are distinct from verbal creativity. Furthermore, our findings demonstrate that style predicts corresponding dimension of creativity even after removing shared variance between style and visualization ability. The results suggest that styles might be a more ecologically valid construct in predicting real-life creative behaviour, such as performance in different professional domains. © 2013 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
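The pulse-height spectrum mentioned above is, in essence, a histogram of the number of detected optical photons per primary x-ray event. A minimal sketch of that binning, with an illustrative binning scheme and made-up event counts (not hybridMANTIS output):

```python
def pulse_height_spectrum(photons_per_event, n_bins=8, max_count=None):
    """Histogram the number of detected optical photons per primary event.

    This is the quantity a scintillator-detector simulation reports as a
    pulse-height spectrum; the binning scheme here is illustrative.
    """
    max_count = max_count or max(photons_per_event)
    width = max_count / n_bins
    spectrum = [0] * n_bins
    for c in photons_per_event:
        # Clamp the top edge into the last bin
        spectrum[min(int(c / width), n_bins - 1)] += 1
    return spectrum

# Made-up optical photon counts for ten primary events
events = [120, 130, 125, 40, 400, 380, 260, 270, 115, 128]
spectrum = pulse_height_spectrum(events)
```

Plotting `spectrum` against bin centers gives the familiar pulse-height curve.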
Integrating natural language processing and web GIS for interactive knowledge domain visualization
NASA Astrophysics Data System (ADS)
Du, Fangming
Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. 
A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.
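The dimensionality reduction step in the workflow above relies on the self-organizing map's simple update rule: each input vector pulls its best-matching unit, and (with a Gaussian neighborhood) nearby units, toward itself. The tiny 1-D SOM below is a generic sketch of that mechanism, not the thesis implementation; all parameters are illustrative.

```python
import math
import random

def train_som(vectors, n_units=4, epochs=50, lr=0.3, sigma=0.5, seed=0):
    """Train a tiny 1-D self-organizing map on document vectors.

    Each input pulls its best-matching unit (BMU) toward it, and pulls the
    BMU's map neighbors with Gaussian-decaying strength -- the mechanism
    that lays similar documents out near each other on the map.
    """
    rng = random.Random(seed)
    dim = len(vectors[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in vectors:
            bmu = best_unit(units, x)
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                units[u] = [w + lr * h * (xi - w)
                            for w, xi in zip(units[u], x)]
    return units

def best_unit(units, x):
    """Index of the unit with the smallest squared distance to x."""
    return min(range(len(units)),
               key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], x)))

# Two toy "document" clusters in 2-D; real inputs would be topic vectors
docs = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
som = train_som(docs)
```

After training, mapping each document to its best unit yields the spatial layout that the web GIS front end then serves for panning and zooming.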
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conklin, Shane
2013-09-30
Shell space fit-out included faculty office advising space, student study space, a staff restroom, and a lobby cafe. Electrical, HVAC, and fire alarm installations and upgrades to existing systems were required to support the newly configured spaces. These installations and upgrades included audio/visual equipment, additional electrical outlets, and connections to emergency generators. The project provided increased chilled water capacity with the addition of an electric centrifugal chiller. Upgrades associated with the chiller included an exhaust ventilation fan upgrade, electrical conductor and breaker upgrades, piping, and upgrades to air handling equipment.
HaploForge: a comprehensive pedigree drawing and haplotype visualization web application.
Tekman, Mehmet; Medlar, Alan; Mozere, Monika; Kleta, Robert; Stanescu, Horia
2017-12-15
Haplotype reconstruction is an important tool for understanding the aetiology of human disease. Haplotyping infers the most likely phase of observed genotypes conditional on constraints imposed by the genotypes of other pedigree members. The results of haplotype reconstruction, when visualized appropriately, show which alleles are identical by descent despite the presence of untyped individuals. When used in concert with linkage analysis, haplotyping can help delineate a locus of interest and provide a succinct explanation for the transmission of the trait locus. Unfortunately, the design choices made by existing haplotype visualization programs do not scale to large numbers of markers. Indeed, following haplotypes from generation to generation requires excessive scrolling back and forth. In addition, the most widely used program for haplotype visualization produces inconsistent recombination artefacts for the X chromosome. To resolve these issues, we developed HaploForge, a novel web application for haplotype visualization and pedigree drawing. HaploForge takes advantage of HTML5 to be fast, portable and avoid the need for local installation. It can accurately visualize autosomal and X-linked haplotypes from both outbred and consanguineous pedigrees. Haplotypes are coloured based on identity by descent using a novel A* search algorithm and we provide a flexible viewing mode to aid visual inspection. HaploForge can currently process haplotype reconstruction output from Allegro, GeneHunter, Merlin and Simwalk. HaploForge is licensed under GPLv3 and is hosted and maintained via GitHub. https://github.com/mtekman/haploforge. r.kleta@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
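The abstract credits a novel A* search for the identity-by-descent colouring but does not describe it, so the sketch below shows only the generic A* pattern such an algorithm builds on (lowest-cost path search with an admissible heuristic), demonstrated on a toy grid rather than on haplotype data.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search: returns the lowest-cost path from start to goal.

    `neighbors(n)` yields (next_node, step_cost) pairs; `heuristic(n)` must
    never overestimate the remaining cost for the result to be optimal.
    """
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + heuristic(nxt), ng, nxt, [*path, nxt]))
    return None  # goal unreachable

# Toy problem: move right/down on a 3x3 lattice with unit step cost
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x, y + 1)):
        if nx <= 2 and ny <= 2:
            yield (nx, ny), 1

# Manhattan distance is admissible here
path = a_star((0, 0), (2, 2), grid_neighbors,
              lambda p: (2 - p[0]) + (2 - p[1]))
```

In a haplotyping context, nodes would encode phase assignments and the cost function would penalize implausible transmissions; those specifics are HaploForge's own.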
Pedrotti, Emilio; Carones, Francesco; Aiello, Francesco; Mastropasqua, Rodolfo; Bruni, Enrico; Bonacci, Erika; Talli, Pietro; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio
2018-02-01
To compare the visual acuity, refractive outcomes, and quality of vision in patients with bilateral implantation of 4 intraocular lenses (IOLs). Department of Neurosciences, Biomedicine and Movement Sciences, Eye Clinic, University of Verona, Verona, and Carones Ophthalmology Center, Milano, Italy. Prospective case series. The study included patients who had bilateral cataract surgery with the implantation of 1 of 4 IOLs as follows: Tecnis 1-piece monofocal (monofocal IOL), Tecnis Symfony extended range of vision (extended-range-of-vision IOL), Restor +2.5 diopter (D) (+2.5 D multifocal IOL), and Restor +3.0 D (+3.0 D multifocal IOL). Visual acuity, refractive outcome, defocus curve, objective optical quality, contrast sensitivity, spectacle independence, and glare perception were evaluated 6 months after surgery. The study comprised 185 patients. The extended-range-of-vision IOL (55 patients) showed better distance visual outcomes than the monofocal IOL (30 patients) and high-addition apodized diffractive-refractive multifocal IOLs (P ≤ .002). The +3.0 D multifocal IOL (50 patients) showed the best near visual outcomes (P < .001). The +2.5 D multifocal IOL (50 patients) and extended-range-of-vision IOL provided significantly better intermediate visual outcomes than the other 2 IOLs, with significantly better vision for a defocus level of -1.5 D (P < .001). Better spectacle independence was shown for the +2.5 D multifocal IOL and extended-range-of-vision IOL (P < .001). The extended-range-of-vision IOL and +2.5 D multifocal IOL provided significantly better intermediate visual restoration after cataract surgery than the monofocal IOL and +3.0 D multifocal IOL, with significantly better quality of vision with the extended-range-of-vision IOL. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Coastal On-line Assessment and Synthesis Tool 2.0
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Nguyen, Beth
2011-01-01
COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.
Visual aid tool to improve decision making in acute stroke care.
Saposnik, Gustavo; Goyal, Mayank; Majoie, Charles; Dippel, Diederik; Roos, Yvo; Demchuk, Andrew; Menon, Bijoy; Mitchell, Peter; Campbell, Bruce; Dávalos, Antoni; Jovin, Tudor; Hill, Michael D
2016-10-01
Background Acute stroke care represents a challenge for decision makers. Recent randomized trials showed the benefits of endovascular therapy. Our goal was to provide a visual aid tool to guide clinicians in the decision process of endovascular intervention in patients with acute ischemic stroke. Methods We created visual plots (Cates' plots; www.nntonline.net ) representing the benefits of standard of care vs. endovascular thrombectomy from the pooled analysis of five RCTs using stent retrievers. These plots represent the following clinically relevant outcomes: (1) functionally independent state (modified Rankin scale (mRS) 0 to 2 at 90 days), (2) excellent recovery (mRS 0-1) at 90 days, (3) NIHSS 0-2, (4) early neurological recovery, and (5) revascularization at 24 h. Subgroups visually represented include time to treatment and baseline stroke severity strata. Results Overall, 1287 patients (634 assigned to endovascular thrombectomy, 653 assigned to control) were included to create the visual plots. Cates' visual plots revealed that for every 100 patients with acute ischemic stroke and large vessel occlusion, 27 would achieve independence at 90 days (mRS 0-2) in the control group compared to 49 (95% CI 43-56) in the intervention group. Similarly, 21 patients would achieve early neurological recovery at 24 h compared to 54 (95% CI 45-63) out of 100 for the intervention group. Conclusion Cates' plots may assist clinicians and patients in visualizing and comparing potential outcomes after an acute ischemic stroke. Our results suggest that for every 100 treated individuals with an acute ischemic stroke and a large vessel occlusion, endovascular thrombectomy would provide 22 additional patients reaching independence at three months and 33 more patients achieving early neurological recovery compared to controls.
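The per-100 figures quoted in the abstract reduce to simple absolute-risk arithmetic, which is exactly what a Cates plot visualizes as recoloured faces. A sketch reproducing those numbers (the function name is illustrative; the rates come from the pooled analysis above):

```python
def additional_per_100(control_rate, treatment_rate):
    """Extra patients per 100 achieving the outcome with treatment,
    i.e. the absolute risk difference a Cates plot displays."""
    return round(100 * (treatment_rate - control_rate))

# Rates from the pooled analysis quoted above
extra_independent = additional_per_100(0.27, 0.49)  # mRS 0-2 at 90 days
extra_recovery = additional_per_100(0.21, 0.54)     # early neuro. recovery
```

The reciprocal of the risk difference gives the familiar number needed to treat.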
WebGIVI: a web-based gene enrichment analysis and visualization tool.
Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J
2017-05-04
A major challenge of high-throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes with informative terms (iTerms) that are obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool ( http://raven.anr.udel.edu/webgivi/ ) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene:iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image that is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI . The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php .
Low-speed Aerodynamic Investigations of a Hybrid Wing Body Configuration
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.; Gatlin, Gregory M.; Jenkins, Luther N.; Murphy, Patrick C.; Carter, Melissa B.
2014-01-01
Two low-speed static wind tunnel tests and a water tunnel static and dynamic forced-motion test have been conducted on a hybrid wing-body (HWB) twinjet configuration. These tests, in addition to computational fluid dynamics (CFD) analysis, have provided a comprehensive dataset of the low-speed aerodynamic characteristics of this nonproprietary configuration. In addition to force and moment measurements, the tests included surface pressures, flow visualization, and off-body particle image velocimetry measurements. This paper will summarize the results of these tests and highlight the data that is available for code comparison or additional analysis.
Austin, John H. M.; Hogg, James C.; Grenier, Philippe A.; Kauczor, Hans-Ulrich; Bankier, Alexander A.; Barr, R. Graham; Colby, Thomas V.; Galvin, Jeffrey R.; Gevenois, Pierre Alain; Coxson, Harvey O.; Hoffman, Eric A.; Newell, John D.; Pistolesi, Massimo; Silverman, Edwin K.; Crapo, James D.
2015-01-01
The purpose of this statement is to describe and define the phenotypic abnormalities that can be identified on visual and quantitative evaluation of computed tomographic (CT) images in subjects with chronic obstructive pulmonary disease (COPD), with the goal of contributing to a personalized approach to the treatment of patients with COPD. Quantitative CT is useful for identifying and sequentially evaluating the extent of emphysematous lung destruction, changes in airway walls, and expiratory air trapping. However, visual assessment of CT scans remains important to describe patterns of altered lung structure in COPD. The classification system proposed and illustrated in this article provides a structured approach to visual and quantitative assessment of COPD. Emphysema is classified as centrilobular (subclassified as trace, mild, moderate, confluent, and advanced destructive emphysema), panlobular, and paraseptal (subclassified as mild or substantial). Additional important visual features include airway wall thickening, inflammatory small airways disease, tracheal abnormalities, interstitial lung abnormalities, pulmonary arterial enlargement, and bronchiectasis. © RSNA, 2015 PMID:25961632
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provide a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
The clinical use of dynamic posturography in the elderly.
Shepard, N T
1989-12-01
We provide an overview of the clinical uses of dynamic posturography. Although the equipment described to perform this testing is expensive, the concepts, especially those for sensory organization, can be applied for about $20.00. To apply the six sensory organization conditions, one merely needs some way to disrupt proprioceptive information by maintaining ankle angle and to provide visual conflict stimuli. We found that proprioceptive information can be disrupted easily by asking the patient to stand on a thick (4-inch) dense piece of foam rubber like that used in cushions for furniture. Visual stabilization conflict can be provided by having the patient wear a 19- to 20-inch Japanese lantern with a head-mounting system in the center so that the patient's movements are not reflected in relative movements of the visual environment. With use of these two simple tools, the six sensory organization tests can be approximated in a clinical situation in a short time and can provide some relative information about a patient's postural control capabilities. With minor additional work, a quantitative measure of output that gives indications of the amount of anterior-posterior sway can also be provided. For elderly patients with a variety of problems ranging from general unsteadiness to frank vertigo, the risk of falling can be devastating, and it is important to provide a thorough investigation of the total balance system. The systematic investigation, qualitatively or quantitatively, of the integration of sensory inputs and motor outputs provides a dimension that typically has been lacking in the routine "dizzy patient workup" for all ages but especially for elderly patients. Therefore, the application of the postural maintenance theory with the above-described procedures or variations in these procedures appears to have a great deal of clinical relevance in the evaluation of patients with gait and balance disorders.
These types of evaluations represent an adjunct or addition to the evaluation of the vestibular system and the vestibulo-ocular reflexes and by no means should be considered a substitute for that traditional evaluation. It is the combination of information that can provide the clinician with a more global picture of the entire balance system and its functional capabilities.
Ramón, María L; Piñero, David P; Pérez-Cambrodí, Rafael J
2012-02-01
To examine the visual performance of a rotationally asymmetric multifocal intraocular lens (IOL) by correlating the defocus curve of the IOL-implanted eye with the intraocular aberrometric profile and impact on the quality of life. A prospective, consecutive, case series study including 26 eyes from 13 patients aged between 50 and 83 years (mean: 65.54±7.59 years) was conducted. All patients underwent bilateral cataract surgery with implantation of a rotationally asymmetric multifocal IOL (Lentis Mplus LS-312 MF30, Oculentis GmbH). Distance and near visual acuity outcomes, intraocular aberrations, defocus curve, and quality of life (assessed using the National Eye Institute Visual Functioning Questionnaire-25) were evaluated postoperatively (mean follow-up: 6.42±2.24 months). A significant improvement in distance visual acuity was found postoperatively (P<.01). Mean postoperative logMAR distance-corrected near visual acuity was 0.19±0.12 (∼20/30). Corrected distance visual acuity and near visual acuity of 20/20 or better were achieved by 30.8% and 7.7% of eyes, respectively. Of all eyes, 96.2% had a postoperative addition between 0 and 1.00 diopter (D). The defocus curve showed two peaks of maximum visual acuity (0 and 3.00 D of defocus), with an acceptable range of intermediate vision. LogMAR visual acuity corresponding to near defocus was directly correlated with some higher order intraocular aberrations (r⩾0.44, P⩽.04). Some difficulties evaluated with the quality of life test correlated directly with near and intermediate visual acuity (r⩾0.50, P⩽.01). The Lentis Mplus multifocal IOL provides good distance, intermediate, and near visual outcomes; however, the induced intraocular aberrometric profile may limit the potential visual benefit. Copyright 2012, SLACK Incorporated.
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
pySeismicDQA: open source post experiment data quality assessment and processing
NASA Astrophysics Data System (ADS)
Polkowski, Marcin
2017-04-01
pySeismicDQA (Seismic Data Quality Assessment) is a Python-based, open-source set of tools dedicated to data processing after passive seismic experiments. The primary goal of this toolset is the unification of data types and formats from different dataloggers, which is necessary for further processing. This process requires additional checks of the data for errors, equipment malfunction, format errors, abnormal noise levels, etc. In all such cases the user needs to decide (manually or by an automatic threshold) whether the data are removed from the output dataset. Additionally, the output dataset can be visualized in the form of a website with data-availability charts and waveform visualization with an (external) earthquake catalog. Data processing can be extended with simple STA/LTA event detection. pySeismicDQA was designed and tested for two passive seismic experiments in central Europe: PASSEQ 2006-2008 and "13 BB Star" (2013-2016). National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
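The STA/LTA event detection mentioned in this abstract compares a short-term average of signal energy with a long-term average and triggers when their ratio exceeds a threshold. A minimal illustrative sketch, not pySeismicDQA's actual implementation (function name and windowing choices are assumptions; hardened versions exist in tools such as ObsPy's `classic_sta_lta`):

```python
import numpy as np

def sta_lta_trigger(trace, nsta, nlta, on_ratio):
    """Return sample indices where the STA/LTA energy ratio exceeds on_ratio.

    nsta, nlta: short- and long-term window lengths in samples (nsta < nlta).
    """
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))   # prefix sums of energy
    picks = []
    for i in range(nlta, len(energy) + 1):
        sta = (csum[i] - csum[i - nsta]) / nsta          # short window ending at i
        lta = (csum[i] - csum[i - nlta]) / nlta          # long window ending at i
        if lta > 0 and sta / lta > on_ratio:
            picks.append(i - 1)
    return picks

# Synthetic example: low-amplitude noise with a burst starting at sample 200
trace = [0.1] * 200 + [1.0] * 20 + [0.1] * 80
picks = sta_lta_trigger(trace, nsta=5, nlta=100, on_ratio=4.0)
```

On this synthetic trace the first triggered index coincides with the onset of the burst.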
Embedding, serial sectioning and staining of zebrafish embryos using JB-4 resin.
Sullivan-Brown, Jessica; Bisher, Margaret E; Burdine, Rebecca D
2011-01-01
Histological techniques are critical for observing tissue and cellular morphology. In this paper, we outline our protocol for embedding, serial sectioning, staining and visualizing zebrafish embryos embedded in JB-4 plastic resin, a glycol methacrylate-based medium that results in excellent preservation of tissue morphology. In addition, we describe our procedures for staining plastic sections with toluidine blue or hematoxylin and eosin, and show how to couple these stains with whole-mount RNA in situ hybridization. We also describe how to maintain and visualize immunofluorescence and EGFP signals in JB-4 resin. The protocol we outline, from embryo preparation, embedding, sectioning and staining to visualization, can be accomplished in 3 d. Overall, we reinforce that plastic embedding can provide higher resolution of cellular details and is a valuable tool for cellular and morphological studies in zebrafish.
Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J
2003-01-01
eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.
Wang, Yinghua; Yan, Jiaqing; Wen, Jianbin; Yu, Tao; Li, Xiaoli
2016-01-01
Before epilepsy surgeries, intracranial electroencephalography (iEEG) is often employed in function mapping and epileptogenic foci localization. Although the implanted electrodes provide crucial information for epileptogenic zone resection, a convenient clinical tool for electrode position registration and Brain Function Mapping (BFM) visualization is still lacking. In this study, we developed a BFM Tool, which facilitates electrode position registration and BFM visualization, with an application to epilepsy surgeries. The BFM Tool mainly utilizes electrode location registration and function mapping based on pre-defined brain models from other software. In addition, the electrode node and mapping properties, such as the node size/color, edge color/thickness, mapping method, can be adjusted easily using the setting panel. Moreover, users may manually import/export location and connectivity data to generate figures for further application. The role of this software is demonstrated by a clinical study of language area localization. The BFM Tool helps clinical doctors and researchers visualize implanted electrodes and brain functions in an easy, quick and flexible manner. Our tool provides convenient electrode registration, easy brain function visualization, and has good performance. It is clinical-oriented and is easy to deploy and use. The BFM tool is suitable for epilepsy and other clinical iEEG applications.
Wang, Yinghua; Yan, Jiaqing; Wen, Jianbin; Yu, Tao; Li, Xiaoli
2016-01-01
Objects: Before epilepsy surgeries, intracranial electroencephalography (iEEG) is often employed in function mapping and epileptogenic foci localization. Although the implanted electrodes provide crucial information for epileptogenic zone resection, a convenient clinical tool for electrode position registration and Brain Function Mapping (BFM) visualization is still lacking. In this study, we developed a BFM Tool, which facilitates electrode position registration and BFM visualization, with an application to epilepsy surgeries. Methods: The BFM Tool mainly utilizes electrode location registration and function mapping based on pre-defined brain models from other software. In addition, the electrode node and mapping properties, such as the node size/color, edge color/thickness, mapping method, can be adjusted easily using the setting panel. Moreover, users may manually import/export location and connectivity data to generate figures for further application. The role of this software is demonstrated by a clinical study of language area localization. Results: The BFM Tool helps clinical doctors and researchers visualize implanted electrodes and brain functions in an easy, quick and flexible manner. Conclusions: Our tool provides convenient electrode registration, easy brain function visualization, and has good performance. It is clinical-oriented and is easy to deploy and use. The BFM tool is suitable for epilepsy and other clinical iEEG applications. PMID:27199729
Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Drouhard, Margaret MEG G; Beaver, Justin M
Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems, where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics is fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration in third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Bell, Sherry Mee; McCallum, R Steve; Cox, Elizabeth A
2003-01-01
One hundred five participants from a random sample of elementary and middle school children completed measures of reading achievement and cognitive abilities presumed, based on a synthesis of current dyslexia research, to underlie reading. Factor analyses of these cognitive variables (including auditory processing, phonological awareness, short-term auditory memory, visual memory, rapid automatized naming, and visual processing speed) produced three empirically and theoretically derived factors (auditory processing, visual processing/speed, and memory), each of which contributed to the prediction of reading and spelling skills. Factor scores from the three factors combined predicted 85% of the variance associated with letter/sight word naming, 70% of the variance associated with reading comprehension, 73% for spelling, and 61% for phonetic decoding. The auditory processing factor was the strongest predictor, accounting for 27% to 43% of the variance across the different achievement areas. The results provide practitioners and researchers with theoretical and empirical support for the inclusion of measures of the three factors, in addition to specific measures of reading achievement, in a standardized assessment of dyslexia. Guidelines for a thorough, research-based assessment are provided.
Tebbutt, G. M.
1991-01-01
The relationship between visual inspections carried out by environmental health officers and microbiological examination was studied in 89 restaurants. Using 30 variables, a standardized inspection procedure was developed, and each of the premises was assessed in six main areas: structure and design, cleaning and cleanliness, personal hygiene, risk of contamination, temperature control, and training and knowledge about food hygiene. Selected foods and specimens from hands, surfaces, and wiping cloths were examined. There were significant associations between all six areas of the inspections. The structure and design were significantly related to the combined score from all the other areas (P < 0.001). There were no highly significant associations between microbiological examination and visual assessments. The microbial contamination of wiping cloths, however, was related to cleaning and cleanliness (P = 0.005). Microbial sampling provided additional information to inspections and was a valuable aid. Further development of this risk-assessment approach could provide an effective system for monitoring potential health risks in high-risk food premises. PMID:1936161
Virtual GEOINT Center: C2ISR through an avatar's eyes
NASA Astrophysics Data System (ADS)
Seibert, Mark; Tidbal, Travis; Basil, Maureen; Muryn, Tyler; Scupski, Joseph; Williams, Robert
2013-05-01
As the number of devices collecting and sending data in the world increases, finding ways to visualize and understand that data is becoming more and more of a problem. This is often called the problem of "Big Data." The Virtual Geoint Center (VGC) aims to aid in solving that problem by providing a way to combine the use of the virtual world with outside tools. Using open-source software such as OpenSim and Blender, the VGC uses a visually stunning 3D environment to display the data sent to it. The VGC is broken up into two major components: the Kinect Minimap and the Geoint Map. The Kinect Minimap uses the Microsoft Kinect and its open-source software to make a miniature display of people the Kinect detects in front of it. The Geoint Map collects smartphone sensor information from online databases and displays it in real time on a map generated by Google Maps. By combining outside tools and the virtual world, the VGC can help a user "visualize" data, and provide additional tools to "understand" the data.
Understanding the Role of the Modality Principle in Multimedia Learning Environments
ERIC Educational Resources Information Center
Oberfoell, A.; Correia, A.
2016-01-01
The modality principle states that low-experience learners more successfully understand information that uses narration rather than on-screen text. This is due to the idea that on-screen text may produce a cognitive overload if it is accompanied by other visual elements. Other studies provided additional data and support for the modality principle…
A Study of Multifunctional Document Centers that Are Accessible to People Who Are Visually Impaired
ERIC Educational Resources Information Center
Huffman, Lee A.; Uslan, Mark M.; Burton, Darren M.; Eghtesadi, Caesar
2009-01-01
The capabilities of modern photocopy machines have advanced beyond the simple duplication of documents. In addition to the standard functions of copying, collating, and stapling, such machines can be a part of telecommunication networks and provide printing, scanning, faxing, and e-mailing functions. No longer just copy machines, these devices are…
ERIC Educational Resources Information Center
Bybee, Jacquelyn; Cavenaugh, Brenda S.
2013-01-01
The Randolph-Sheppard Business Enterprise Program provides employment for more than 2,300 entrepreneurs who are legally blind across the United States. Moreover, these entrepreneurs employ an additional 14,000 people, of whom almost 2,000 are visually impaired or have other disabilities (Rehabilitation Services Administration [RSA], 2010). With…
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T
2010-10-01
Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.
Development and Application of PIV in Supersonic flows
NASA Astrophysics Data System (ADS)
Rong, Z.; Liu, H.; Chen, F.
2011-09-01
This paper presents PIV measurements obtained in Mach 4.0 flowfields in the SJTU Hypersonic Wind Tunnel (HWT). In order to validate this technique, PIV experiments were conducted in the empty test section to provide uniform-flow data for comparison with analytical data. The dynamical properties of particle tracers were investigated to measure the particle response across an oblique shock wave. The flow over a sharp cone at Ma = 4.0 was tested in comparison with CFD and schlieren visualization. It is shown that shock wave angles measured with PIV are in good agreement with theory and schlieren visualization; in addition, the overall flow is consistent with the CFD results.
A Review of Research on the Literacy of Students with Visual Impairments and Additional Disabilities
ERIC Educational Resources Information Center
Parker, Amy T.; Pogrund, Rona L.
2009-01-01
Research on the development of literacy in children with visual impairments and additional disabilities is minimal even though these children make up approximately 65% of the population of children with visual impairments. This article reports on emerging themes that were explored after a review of the literature revealed nine literacy studies…
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis; ...
2015-02-13
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training—evaluation in a more realistic setting may be necessary.
Visual just noticeable differences
NASA Astrophysics Data System (ADS)
Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin
2018-02-01
A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that results in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great importance for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
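The abstract's criterion, that one VJND corresponds to a fractional change of roughly 0.025 to 0.075 in the area under the modulation transfer function (AMTF), can be illustrated with a minimal sketch. The sampled MTF values, frequency grid, and thresholding function below are illustrative assumptions, not data or code from the study:

```python
def amtf(freqs, mtf):
    """Area under a sampled MTF curve (trapezoidal rule)."""
    area = 0.0
    for i in range(1, len(freqs)):
        area += 0.5 * (mtf[i] + mtf[i - 1]) * (freqs[i] - freqs[i - 1])
    return area

def is_noticeable(freqs, mtf_baseline, mtf_perturbed, threshold=0.025):
    """True if the fractional AMTF change reaches the given VJND threshold."""
    a0 = amtf(freqs, mtf_baseline)
    a1 = amtf(freqs, mtf_perturbed)
    return abs(a1 - a0) / a0 >= threshold

# Hypothetical example: a uniform 4% drop in modulation is just noticeable
# at the lower end (0.025) of the reported range but not at the upper (0.075).
freqs = [0, 10, 20, 30, 40, 50, 60]          # cycles per degree
base = [1.0, 0.9, 0.7, 0.5, 0.3, 0.15, 0.05]  # baseline MTF samples
perturbed = [m * 0.96 for m in base]          # perturbed state
print(is_noticeable(freqs, base, perturbed, 0.025))  # True
print(is_noticeable(freqs, base, perturbed, 0.075))  # False
```

Because the perturbation here scales the whole curve uniformly, the fractional AMTF change equals the scaling loss (0.04), which falls inside the reported 0.025 to 0.075 range.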
Social inequalities in blindness and visual impairment: A review of social determinants
Ulldemolins, Anna Rius; Lansingh, Van C; Valencia, Laura Guisasola; Carter, Marissa J; Eckert, Kristen A
2012-01-01
Health inequities are related to social determinants based on gender, socioeconomic status, ethnicity, race, living in a specific geographic region, or having a specific health condition. Such inequities were reviewed for blindness and visual impairment by searching for studies on the subject in PubMed from 2000 to 2011 in the English and Spanish languages. The goal of this article is to provide a current review of how inequities based on the aforementioned social determinants of health influence the prevalence of visual impairment and blindness. With regard to gender inequality, women have a higher prevalence of visual impairment and blindness, which cannot be explained by age or access to services alone. Socioeconomic status measured as higher income, higher educational status, or non-manual occupational social class was inversely associated with prevalence of blindness or visual impairment. Ethnicity and race were associated with visual impairment and blindness, although there is general confusion over this socioeconomic position determinant. Geographic inequalities in visual impairment were related to income (of the region, nation, or continent) and to living in a rural area, and an association with socioeconomic and political context was suggested. While inequalities related to blindness and visual impairment have rarely been specifically addressed in research, there is still evidence of the association of social determinants with the prevalence of blindness and visual impairment. Additional research should be done on the associations with intermediary determinants and with socioeconomic and political context. PMID:22944744
Unique Temporal Expression of Triplicated Long-Wavelength Opsins in Developing Butterfly Eyes
Arikawa, Kentaro; Iwanaga, Tomoyuki; Wakakuwa, Motohiro; Kinoshita, Michiyo
2017-01-01
Following gene duplication events, the expression patterns of the resulting gene copies can often diverge both spatially and temporally. Here we report on gene duplicates that are expressed in distinct but overlapping patterns, and which exhibit temporally divergent expression. Butterflies have sophisticated color vision and spectrally complex eyes, typically with three types of heterogeneous ommatidia. The eyes of the butterfly Papilio xuthus express two green- and one red-absorbing visual pigment, which came about via gene duplication events, in addition to one ultraviolet (UV)- and one blue-absorbing visual pigment. We localized mRNAs encoding opsins of these visual pigments in developing eye disks throughout the pupal stage. The mRNAs of the UV and blue opsin are expressed early in pupal development (pd), specifying the type of the ommatidium in which they appear. Red sensitive photoreceptors first express a green opsin mRNA, which is replaced later by the red opsin mRNA. Broadband photoreceptors (that coexpress the green and red opsins) first express the green opsin mRNA, later change to red opsin mRNA and finally re-express the green opsin mRNA in addition to the red mRNA. Such a unique temporal and spatial expression pattern of opsin mRNAs may reflect the evolution of visual pigments and provide clues toward understanding how the spectrally complex eyes of butterflies evolved. PMID:29238294
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.
Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. Having a means to visualize air flow for field tests in general provides greater insight by indicating direction in addition to the magnitude of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be appropriately updated to match the actual flow velocities in both magnitude and direction.
The ALIVE Project: Astronomy Learning in Immersive Virtual Environments
NASA Astrophysics Data System (ADS)
Yu, K. C.; Sahami, K.; Denn, G.
2008-06-01
The Astronomy Learning in Immersive Virtual Environments (ALIVE) project seeks to discover learning modes and optimal teaching strategies using immersive virtual environments (VEs). VEs are computer-generated, three-dimensional environments that can be navigated to provide multiple perspectives. Immersive VEs provide the additional benefit of surrounding a viewer with the simulated reality. ALIVE evaluates the incorporation of an interactive, real-time "virtual universe" into formal college astronomy education. In the experiment, pre-course, post-course, and curriculum tests will be used to determine the efficacy of immersive visualizations presented in a digital planetarium versus the same visual simulations in the non-immersive setting of a normal classroom, as well as a control case using traditional classroom multimedia. To normalize for inter-instructor variability, each ALIVE instructor will teach at least one of each class in each of the three test groups.
Multimodal assessment of visual attention using the Bethesda Eye & Attention Measure (BEAM).
Ettenhofer, Mark L; Hershaw, Jamie N; Barry, David M
2016-01-01
Computerized cognitive tests measuring manual response time (RT) and errors are often used in the assessment of visual attention. Evidence suggests that saccadic RT and errors may also provide valuable information about attention. This study was conducted to examine a novel approach to multimodal assessment of visual attention incorporating concurrent measurements of saccadic eye movements and manual responses. A computerized cognitive task, the Bethesda Eye & Attention Measure (BEAM) v.34, was designed to evaluate key attention networks through concurrent measurement of saccadic and manual RT and inhibition errors. Results from a community sample of n = 54 adults were analyzed to examine effects of BEAM attention cues on manual and saccadic RT and inhibition errors, internal reliability of BEAM metrics, relationships between parallel saccadic and manual metrics, and relationships of BEAM metrics to demographic characteristics. Effects of BEAM attention cues (alerting, orienting, interference, gap, and no-go signals) were consistent with previous literature examining key attention processes. However, corresponding saccadic and manual measurements were weakly related to each other, and only manual measurements were related to estimated verbal intelligence or years of education. This study provides preliminary support for the feasibility of multimodal assessment of visual attention using the BEAM. Results suggest that BEAM saccadic and manual metrics provide divergent measurements. Additional research will be needed to obtain comprehensive normative data, to cross-validate BEAM measurements with other indicators of neural and cognitive function, and to evaluate the utility of these metrics within clinical populations of interest.
Reverse phase protein arrays in signaling pathways: a data integration perspective
Creighton, Chad J; Huang, Shixia
2015-01-01
The reverse phase protein array (RPPA) data platform provides expression data for a prespecified set of proteins, across a set of tissue or cell line samples. Being able to measure either total proteins or posttranslationally modified proteins, even ones present at lower abundances, RPPA represents an excellent way to capture the state of key signal transduction pathways in normal or diseased cells. RPPA data can be combined with those of other molecular profiling platforms, in order to obtain a more complete molecular picture of the cell. This review offers perspective on the use of RPPA as a component of integrative molecular analysis, using recent case examples from The Cancer Genome Atlas consortium, showing how RPPA may provide additional insight into cancer beyond what other data platforms may provide. There also exists a clear need for effective visualization approaches to RPPA-based proteomic results; this was highlighted by the recent challenge, put forth by the HPN-DREAM consortium, to develop visualization methods for a highly complex RPPA dataset involving many cancer cell lines, stimuli, and inhibitors applied over a time course. In this review, we put forth a number of general guidelines for effective visualization of complex molecular datasets, namely, showing the data, ordering data elements deliberately, enabling generalization, focusing on relevant specifics, and putting things into context. We give examples of how these principles can be utilized in visualizing the intrinsic subtypes of breast cancer and in meaningfully displaying the entire HPN-DREAM RPPA dataset within a single page. PMID:26185419
Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R
2010-01-01
Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way for functional testing of the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process those signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces, as well as by designing the application programming interfaces (API) as an integrated domain specific language (DSL), the framework offers a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added, but the framework can also be adapted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
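The pipes-and-filters pattern with a fluent interface mentioned above can be sketched in a few lines. The framework itself is in Java and its API is not reproduced here; this is an illustrative Python sketch of the pattern, with made-up filter names and signal values:

```python
class SignalPipeline:
    """Toy pipes-and-filters pipeline with a fluent (chainable) interface.

    Each filter transforms the signal and returns self, so processing
    steps read left to right, as in a fluent DSL.
    """

    def __init__(self, samples):
        self.samples = list(samples)

    def detrend(self):
        # Remove the mean, i.e. a constant baseline offset.
        mean = sum(self.samples) / len(self.samples)
        self.samples = [s - mean for s in self.samples]
        return self  # returning self is what enables chaining

    def scale(self, factor):
        # Rescale the signal, e.g. to convert units.
        self.samples = [s * factor for s in self.samples]
        return self

    def result(self):
        return self.samples

# Filters chain like sentences in a small domain specific language:
processed = SignalPipeline([1.0, 2.0, 3.0]).detrend().scale(10).result()
print(processed)  # [-10.0, 0.0, 10.0]
```

Each filter has one job and a uniform shape, so new processing steps can be added without touching existing ones, which is the extensibility property the abstract describes.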
Auditory biofeedback substitutes for loss of sensory information in maintaining stance.
Dozza, Marco; Horak, Fay B; Chiari, Lorenzo
2007-03-01
The importance of sensory feedback for postural control in stance is evident from the balance improvements occurring when sensory information from the vestibular, somatosensory, and visual systems is available. However, the extent to which audio-biofeedback (ABF) information can also improve balance has not been determined. It is also unknown why additional artificial sensory feedback is more effective for some subjects than others and in some environmental contexts than others. The aim of this study was to determine the relative effectiveness of an ABF system to reduce postural sway in stance in healthy control subjects and in subjects with bilateral vestibular loss, under conditions of reduced vestibular, visual, and somatosensory inputs. This ABF system used a threshold region and non-linear scaling parameters customized for each individual, to provide subjects with pitch and volume coding of their body sway. ABF had the largest effect on reducing the body sway of the subjects with bilateral vestibular loss when the environment provided limited visual and somatosensory information; it had the smallest effect on reducing the sway of subjects with bilateral vestibular loss when the environment provided full somatosensory information. The extent to which subjects substituted ABF information for their loss of sensory information was related to the extent to which each subject was visually dependent or somatosensory-dependent for postural control. Comparison of postural sway under a variety of sensory conditions suggests that patients with profound bilateral loss of vestibular function show larger than normal information redundancy among the remaining senses and ABF of trunk sway. The results support the hypothesis that the nervous system uses augmented sensory information differently depending both on the environment and on individual proclivities to rely on vestibular, somatosensory, or visual information to control sway.
Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery
NASA Astrophysics Data System (ADS)
Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.
2017-05-01
In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced HI astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.
García-Lázaro, Santiago; Ferrer-Blasco, Teresa; Madrid-Costa, David; Albarrán-Diego, César; Montés-Micó, Robert
2015-01-01
To assess and compare the effects of four simultaneous-image multifocal contact lenses (SIMCLs) and of distance-vision-only contact lenses on visual performance in early presbyopes under dim conditions, including the effects of induced glare. In this double-masked crossover study design, 28 presbyopic subjects aged 40 to 46 years were included. All participants were fitted with the four different SIMCLs (Air Optix Aqua Multifocal [AOAM; Alcon], PureVision Multifocal [PM; Bausch & Lomb], Acuvue Oasys for Presbyopia [AOP; Johnson & Johnson Vision], and Biofinity Multifocal [BM; CooperVision]) and with monofocal contact lenses (Air Optix Aqua, Alcon). After 1 month of daily contact lens wear, each subject's binocular distance visual acuity (BDVA) and binocular distance contrast sensitivity (BDCS) were measured using the Functional Visual Analyzer (Stereo Optical Co., Inc.) under mesopic conditions (3 cd/m²), both with no glare and under two levels of induced glare: 1.0 lux (glare 1) and 28 lux (glare 2). Among the SIMCLs, in terms of BDVA, AOAM and PM outperformed BM and AOP. All contact lenses performed best with no glare, followed by glare 1, with the worst results obtained under glare 2. Binocular distance contrast sensitivity revealed statistically significant differences at 12 cycles per degree (cpd). Among the SIMCLs, post hoc multiple comparison testing revealed that AOAM and PM provided the best BDCS at the three luminance levels. For both BDVA and BDCS at 12 cpd, monofocal contact lenses outperformed all SIMCLs under all lighting conditions. AOAM and PM provided better visual performance than BM and AOP for distance vision with low addition and under dim conditions, but all SIMCLs performed worse than monofocal contact lenses.
Laminar circuit organization and response modulation in mouse visual cortex
Olivas, Nicholas D.; Quintanar-Zilinskas, Victor; Nenadic, Zoran; Xu, Xiangmin
2012-01-01
The mouse has become an increasingly important animal model for visual system studies, but few studies have investigated local functional circuit organization of mouse visual cortex. Here we used our newly developed mapping technique combining laser scanning photostimulation (LSPS) with fast voltage-sensitive dye (VSD) imaging to examine the spatial organization and temporal dynamics of laminar circuit responses in living slice preparations of mouse primary visual cortex (V1). During experiments, LSPS using caged glutamate provided spatially restricted neuronal activation in a specific cortical layer, and evoked responses from the stimulated layer to its functionally connected regions were detected by VSD imaging. In this study, we first provided a detailed analysis of spatiotemporal activation patterns at specific V1 laminar locations and measured local circuit connectivity. Then we examined the role of cortical inhibition in the propagation of evoked cortical responses by comparing circuit activity patterns in control and in the presence of GABAa receptor antagonists. We found that GABAergic inhibition was critical in restricting layer-specific excitatory activity spread and maintaining topographical projections. In addition, we investigated how AMPA and NMDA receptors influenced cortical responses and found that blocking AMPA receptors abolished interlaminar functional projections, and the NMDA receptor activity was important in controlling visual cortical circuit excitability and modulating activity propagation. The NMDA receptor antagonist reduced neuronal population activity in time-dependent and laminar-specific manners. Finally, we used the quantitative information derived from the mapping experiments and presented computational modeling analysis of V1 circuit organization. Taken together, the present study has provided important new information about mouse V1 circuit organization and response modulation. PMID:23060751
Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.
Kim, Jeesun; Davis, Chris; Groot, Christopher
2009-12-01
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
Traffic Signs in Complex Visual Environments
DOT National Transportation Integrated Search
1982-11-01
The effects of sign luminance on detection and recognition of traffic control devices is mediated through contrast with the immediate surround. Additionally, complex visual scenes are known to degrade visual performance with targets well above visual...
Sequence alignment visualization in HTML5 without Java.
Gille, Christoph; Weyand, Birgit; Gille, Andreas
2014-01-01
Java has been extensively used for the visualization of biological data on the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that for some tasks may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks, including (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models, and (v) communication with BioDAS servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms, including touch-screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization, and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms, and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS-Word, and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision an increased use of HTML and JavaScript for interactive biological software. Under GPL at: http://www.bioinformatics.org/strap/toHTML/.
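The core idea of rendering an annotated alignment as plain HTML, with conserved columns highlighted, can be sketched briefly. This is not Alignment-To-HTML's actual output format; the markup, highlight color, and function name below are illustrative assumptions:

```python
def alignment_to_html(names, rows):
    """Render aligned sequences as HTML, bolding fully conserved columns."""
    cols = len(rows[0])
    # A column is conserved if every sequence has the same residue there.
    conserved = [len({row[c] for row in rows}) == 1 for c in range(cols)]
    out = ["<pre>"]
    for name, row in zip(names, rows):
        cells = []
        for c, ch in enumerate(row):
            if conserved[c]:
                cells.append('<b style="background:#cfc">%s</b>' % ch)
            else:
                cells.append(ch)
        out.append("%-8s %s" % (name, "".join(cells)))
    out.append("</pre>")
    return "\n".join(out)

# Columns 1, 2, and 4 are conserved; column 3 (G vs T) is variable.
html = alignment_to_html(["seq1", "seq2"], ["ACGT", "ACTT"])
print(html)
```

Because the output is ordinary HTML, it can be embedded in any web page and styled or scripted further, which is the deployment model the abstract describes.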
Comparing object recognition from binary and bipolar edge images for visual prostheses.
Jung, Jae-Hyun; Pu, Tian; Peli, Eli
2016-11-01
Visual prostheses require an effective representation method due to their limited display conditions, which offer only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound recognition.
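The distinction between the binary and bipolar edge representations discussed above can be sketched on a 1-D luminance profile. The threshold value and toy profile are illustrative assumptions, not the paper's filtering method:

```python
def edges(profile, threshold=0.2):
    """Binary edges keep only edge presence (0/1); bipolar edges also keep
    polarity: +1 for a dark-to-light transition, -1 for light-to-dark,
    on a neutral 0 (gray) background."""
    binary, bipolar = [], []
    for i in range(1, len(profile)):
        d = profile[i] - profile[i - 1]
        if abs(d) > threshold:
            binary.append(1)
            bipolar.append(1 if d > 0 else -1)
        else:
            binary.append(0)
            bipolar.append(0)
    return binary, bipolar

profile = [0.1, 0.1, 0.9, 0.9, 0.2, 0.2]  # dark -> light -> dark
b, p = edges(profile)
print(b)  # [0, 1, 0, 1, 0]
print(p)  # [0, 1, 0, -1, 0]
```

The binary output cannot distinguish the rising edge from the falling one, while the bipolar output preserves that polarity, which is the extra shape-from-shading cue the study attributes to bipolar edge images.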
Cross-species 3D virtual reality toolbox for visual and cognitive experiments.
Doucet, Guillaume; Gulli, Roberto A; Martinez-Trujillo, Julio C
2016-06-15
Although simplified visual stimuli, such as dots or gratings presented on homogeneous backgrounds, provide strict control over the stimulus parameters during visual experiments, they fail to approximate visual stimulation in natural conditions. Adoption of virtual reality (VR) in neuroscience research has been proposed to circumvent this problem, by combining strict control of experimental variables and behavioral monitoring within complex and realistic environments. We have created a VR toolbox that maximizes experimental flexibility while minimizing implementation costs. A free VR engine (Unreal 3) has been customized to interface with any control software via text commands, allowing seamless introduction into pre-existing laboratory data acquisition frameworks. Furthermore, control functions are provided for the two most common programming languages used in visual neuroscience: Matlab and Python. The toolbox offers the millisecond time resolution necessary for electrophysiological recordings and is flexible enough to support cross-species usage across a wide range of paradigms. Unlike previously proposed VR solutions whose implementation is complex and time-consuming, our toolbox requires minimal customization or technical expertise to interface with pre-existing data acquisition frameworks, as it relies on already familiar programming environments. Moreover, as it is compatible with a variety of display and input devices, identical VR testing paradigms can be used across species, from rodents to humans. This toolbox facilitates the addition of VR capabilities to any laboratory without perturbing pre-existing data acquisition frameworks or requiring any major hardware changes.
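The text-command interface described above can be sketched as a tiny client. The abstract states that the customized engine is driven by plain text commands from any control software; the specific command strings, port number, and newline framing below are assumptions for illustration, not the toolbox's actual protocol:

```python
import socket

def format_command(command):
    """Frame a text command as the newline-terminated UTF-8 bytes sent to the engine."""
    return (command + "\n").encode("utf-8")

class VRControl:
    """Minimal text-command client in the spirit of the described toolbox."""

    def __init__(self, host="localhost", port=9000):
        # Hypothetical host/port; the real toolbox defines its own endpoint.
        self.sock = socket.create_connection((host, port))

    def send(self, command):
        self.sock.sendall(format_command(command))

    def close(self):
        self.sock.close()

print(format_command("LOAD_SCENE maze01"))

# With an engine listening (command names are hypothetical):
# vr = VRControl()
# vr.send("SET_POSITION 0.0 1.5 -2.0")
# vr.close()
```

Because the wire format is plain text, the same commands can be issued from Matlab, Python, or any language with socket support, which is what makes the interface easy to drop into existing acquisition frameworks.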
Giraud, Stéphanie; Brock, Anke M.; Macé, Marc J.-M.; Jouffrais, Christophe
2017-01-01
Special education teachers for visually impaired students rely on tools such as raised-line maps (RLMs) to teach spatial knowledge. These tools do not fully and adequately meet the needs of the teachers because they are time-consuming to produce, expensive, and not versatile enough to allow rapid updating of the content. For instance, the same RLM can hardly be reused across different lessons. In addition, those maps do not provide any interactivity, which reduces students’ autonomy. With the emergence of 3D printing and low-cost microcontrollers, it is now easy to design affordable interactive small-scale models (SSMs) that are adapted to the needs of special education teachers. However, no study has previously been conducted to evaluate non-visual learning using interactive SSMs. In collaboration with a specialized teacher, we designed a SSM and a RLM representing the evolution of the geography and history of a fictitious kingdom. The two conditions were compared in a study with 24 visually impaired students regarding the memorization of the spatial layout and historical contents. The study showed that the interactive SSM improved both space and text memorization as compared to the RLM with braille legend. In conclusion, we argue that affordable home-made interactive small-scale models can improve learning for visually impaired students. Interestingly, they are adaptable to any teaching situation, including students with specific needs. PMID:28649209
fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.
Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W
2008-01-01
Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space, implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were measured simultaneously with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus, and the precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for motor output control. A similar interpretation holds for our results in the PPC region.
Lan, Shu-Ling; Chen, Yu-Chi; Chang, Hsiu-Ju
2018-06-01
The aim of this paper was to describe the nursing application of mirror visual feedback in a patient suffering from long-term visual hallucinations. The intervention period was from May 15th to October 19th, 2015. Using the five facets of psychiatric nursing assessment, several health problems were observed, including disturbed sensory perceptions (prominent visual hallucinations) and poor self-care (e.g. limited abilities to self-bathe and put on clothing). Furthermore, "caregiver role strain" due to the related intense care burden was noted. After building a therapeutic interpersonal relationship, techniques based on brain plasticity and mirror visual feedback were applied using multiple nursing care methods in order to help the patient suppress her visual hallucinations by enhancing a different visual stimulus. We also taught her how to cope with visual hallucinations in a proper manner. The frequency and content of visual hallucinations were recorded to evaluate the effects of management. The therapeutic plan was formulated together with the patient in order to boost her self-confidence, and a behavior contract was implemented in order to improve her personal hygiene. In addition, psychoeducation on disease-related topics was provided to the patient's family, and they were encouraged to attend relevant therapeutic activities. As a result, her family became less passive and negative and more engaged in and positive about her future. The crisis of "caregiver role strain" was successfully resolved. We hope this experience can serve as a model for enhancing communication and cooperation between family and staff in similar medical settings.
Visualization of the Construction of Ancient Roman Buildings in Ostia Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Hori, Y.; Ogawa, T.
2017-02-01
The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together into what are referred to as "point clouds". Visualizations of the point cloud data, which archaeologists and architects can use in their final reports, are usually produced as JPG or TIFF files. Beyond visualization, re-examining older data and conducting new remote-sensing surveys of Roman construction with precise, detailed measurements yields information that may lead to revised drawings of ancient buildings, drawings that had previously been adduced as evidence without any consideration of their degree of accuracy, and ultimately opens new lines of research on these buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore "skipped" much of the usual post-processing and focused on images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer.
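Typical post-processing steps that the authors chose to skip include voxel-grid downsampling of the raw point cloud. The sketch below is a generic NumPy illustration of that step only, not the tool chain actually used in the Ostia survey.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Thin a point cloud by keeping one centroid per occupied voxel cell."""
    # Map each 3D point to an integer voxel key.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; `inverse` maps each point to its voxel index.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(100_000, 3))  # stand-in for scan data
reduced = voxel_downsample(cloud, voxel_size=1.0)  # at most 10x10x10 cells
```

With a 1 m voxel over a 10 m cube, the 100,000 synthetic points collapse to at most 1,000 representatives, which is the kind of reduction that makes interactive rendering of large scans feasible.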
Examining Chemistry Students' Visual-Perceptual Skills Using the VSCS Tool and Interview Data
NASA Astrophysics Data System (ADS)
Christian, Caroline
The Visual-Spatial Chemistry Specific (VSCS) assessment tool was developed to test students' visual-perceptual skills, which are required to form a mental image of an object. The VSCS was designed around the theoretical framework of Rochford and Archer that provides eight distinct and well-defined visual-perceptual skills with identified problems students might have with each skill set. Factor analysis was used to analyze the results during the validation process of the VSCS. Results showed that the eight factors could not be separated from each other, but instead two factors emerged as significant to the data. These two factors have been defined and described as a general visual-perceptual skill (factor 1) and a skill that adds on a second level of complexity by involving multiple viewpoints such as changing frames of reference. The questions included in the factor analysis were bolstered by the addition of an item response theory (IRT) analysis. Interviews were also conducted with twenty novice students to test face validity of the tool, and to document student approaches at solving visualization problems of this type. Students used five main physical resources or processes to solve the questions, but the resource that was the most successful was handling or building a physical representation of an object.
Characterization of Visual Scanning Patterns in Air Traffic Control
McClung, Sarah N.; Kang, Ziho
2016-01-01
Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
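The filtering idea, collapsing a raw fixation sequence into a simpler scanpath, can be sketched as below; the duration threshold and area-of-interest (AOI) labels are illustrative assumptions, not the authors' actual filtering intensities.

```python
def simplify_scanpath(fixations, min_duration=0.1):
    """Filter a raw scanpath: drop brief fixations, collapse repeats.

    fixations: list of (aoi_label, duration_in_seconds) tuples.
    """
    # Step 1: discard fixations shorter than the threshold.
    kept = [(aoi, dur) for aoi, dur in fixations if dur >= min_duration]
    # Step 2: collapse consecutive fixations on the same AOI (e.g., one aircraft).
    simplified = []
    for aoi, _dur in kept:
        if not simplified or simplified[-1] != aoi:
            simplified.append(aoi)
    return simplified

raw = [("A", 0.3), ("A", 0.2), ("B", 0.05), ("C", 0.4), ("C", 0.3), ("A", 0.5)]
print(simplify_scanpath(raw))  # ['A', 'C', 'A']
```

Raising `min_duration` corresponds to filtering at higher intensity: more of the transient fixations drop out, and the remaining sequence is easier to map onto a linguistically expressed strategy such as "scan A, then C, then back to A".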
On the Treatment of Field Quantities and Elemental Continuity in FEM Solutions.
Jallepalli, Ashok; Docampo-Sanchez, Julia; Ryan, Jennifer K; Haimes, Robert; Kirby, Robert M
2018-01-01
As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this in another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated: visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.
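As a rough illustration of the underlying idea only (not the line-SIAC filter itself, which uses a carefully constructed combination of B-splines to remain accuracy-conserving), the sketch below convolves an element-discontinuous 1D field with a single B-spline kernel to smooth the inter-element jumps:

```python
import numpy as np

def bspline2(t):
    """Quadratic B-spline on [-1.5, 1.5], a building block of SIAC kernels."""
    t = np.abs(t)
    out = np.zeros_like(t)
    near = t < 0.5
    out[near] = 0.75 - t[near] ** 2
    mid = (t >= 0.5) & (t < 1.5)
    out[mid] = 0.5 * (1.5 - t[mid]) ** 2
    return out

# Piecewise-constant (element-discontinuous) samples of sin(x) on 20 elements.
h = 2 * np.pi / 20                     # element width
x = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
elem = np.floor(x / h)
u = np.sin((elem + 0.5) * h)           # one constant value per element

# Convolve with a B-spline kernel scaled to the element width; the jumps at
# element interfaces are smoothed into a continuous profile.
dx = x[1] - x[0]
support = np.arange(-1.5 * h, 1.5 * h + dx, dx)
kernel = bspline2(support / h)
kernel /= kernel.sum()
u_smooth = np.convolve(u, kernel, mode="same")
```

The largest sample-to-sample jump in `u_smooth` is far smaller than the inter-element jumps in `u`, which is the qualitative effect of post-filtering; the actual line-SIAC filter additionally guarantees that the filtered solution retains (or improves) the order of accuracy of the FEM approximation.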
Hager, Audrey M; Dringenberg, Hans C
2012-12-01
The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.
BlockLogo: visualization of peptide and sequence motif conservation
Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian; Sun, Jing; Schönbach, Christian; Reinherz, Ellis L.; Zhang, Guang Lan; Brusic, Vladimir
2013-01-01
BlockLogo is a web-server application for visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs, based on calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, a selection of motif positions, the sequence type, and an output format definition. The output includes the block logo along with the sequence logo and a table of motif frequencies. We deployed BlockLogo as an online application and have demonstrated its utility through examples showing visualization of T-cell epitopes and of both continuous and discontinuous B-cell epitopes. An additional example shows visualization and analysis of the structural motifs that determine the specificity of peptide binding to HLA-DR molecules. The BlockLogo server also employs selected experimentally validated prediction algorithms to enable on-the-fly prediction of MHC binding affinity to 15 common HLA class I and class II alleles, as well as visual analysis of discontinuous epitopes from multiple sequence alignments. It enables the visualization and analysis of structural and functional motifs that are usually described as regular expressions, and it provides a compact view of discontinuous motifs composed of distant positions within biological sequences. BlockLogo is available at: http://research4.dfci.harvard.edu/cvc/blocklogo/ and http://methilab.bu.edu/blocklogo/ PMID:24001880
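The per-column Shannon entropy that underlies logo construction can be sketched as follows (toy alignment; this is not BlockLogo's actual implementation):

```python
import math
from collections import Counter

def column_entropy(msa, position):
    """Shannon entropy (bits) of one alignment column; 0 = fully conserved."""
    column = [seq[position] for seq in msa]
    n = len(column)
    counts = Counter(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy multiple sequence alignment (rows = sequences, columns = positions).
msa = ["ACDEF", "ACDQF", "ACDEF", "TCDEF"]
entropies = [column_entropy(msa, i) for i in range(len(msa[0]))]
```

Here the second, third, and fifth columns are fully conserved (entropy 0), while the first and fourth, each with a 3:1 residue split, score about 0.81 bits. A block entropy extends this idea from single columns to a selected block of (possibly discontinuous) positions taken jointly.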
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
2015-01-01
As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). 
The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.
An Empirical Study on Using Visual Embellishments in Visualization.
Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min
2012-12-01
In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.
A simpler primate brain: the visual system of the marmoset monkey
Solomon, Samuel G.; Rosa, Marcello G. P.
2014-01-01
Humans are diurnal primates with high visual acuity at the center of gaze. Although primates share many similarities in the organization of their visual centers with other mammals, and even other species of vertebrates, their visual pathways also show unique features, particularly with respect to the organization of the cerebral cortex. Therefore, in order to understand some aspects of human visual function, we need to study non-human primate brains. Which species is the most appropriate model? Macaque monkeys, the most widely used non-human primates, are not an optimal choice in many practical respects. For example, much of the macaque cerebral cortex is buried within sulci, and is therefore inaccessible to many imaging techniques, and the postnatal development and lifespan of macaques are prohibitively long for many studies of brain maturation, plasticity, and aging. In these and several other respects the marmoset, a small New World monkey, represents a more appropriate choice. Here we review the visual pathways of the marmoset, highlighting recent work that brings these advantages into focus, and identify where additional work needs to be done to link marmoset brain organization to that of macaques and humans. We will argue that the marmoset monkey provides a good subject for studies of a complex visual system, which will likely allow an important bridge linking experiments in animal models to humans. PMID:25152716
Comprehensive visual field test & diagnosis system in support of astronaut health and performance
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; Clark, Jonathan B.; Reisman, Garrett E.; Tarbell, Mark A.
Long duration spaceflight, permanent human presence on the Moon, and future human missions to Mars will require autonomous medical care to address both expected and unexpected risks. An integrated non-invasive visual field test & diagnosis system is presented for the identification, characterization, and automated classification of visual field defects caused by the spaceflight environment. This system will support the onboard medical provider and astronauts on space missions with an innovative, non-invasive, accurate, sensitive, and fast visual field test. It includes a database for examination data, and a software package for automated visual field analysis and diagnosis. The system will be used to detect and diagnose conditions affecting the visual field, while in space and on Earth, permitting the timely application of therapeutic countermeasures before astronaut health or performance are impaired. State-of-the-art perimetry devices are bulky, thereby precluding application in a spaceflight setting. In contrast, the visual field test & diagnosis system requires only a touchscreen-equipped computer or touchpad device, which may already be in use for other purposes (i.e., no additional payload), and custom software. The system has application in routine astronaut assessment (Clinical Status Exam), pre-, in-, and post-flight monitoring, and astronaut selection. It is deployable in operational space environments, such as aboard the International Space Station or during future missions to or permanent presence on the Moon and Mars.
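A hypothetical sketch of the kind of summary such a touchscreen test could produce: stimulus locations relative to fixation, the subject's seen/missed responses, and a per-quadrant miss rate. The field names and data here are invented for illustration and do not reflect the described system's algorithms.

```python
import numpy as np

def quadrant_miss_rates(xs, ys, seen):
    """Summarize perimetry responses as a miss rate per visual-field quadrant.

    xs, ys: stimulus coordinates in degrees relative to fixation.
    seen:   1 if the subject responded to the stimulus, 0 if missed.
    """
    xs, ys = np.asarray(xs), np.asarray(ys)
    seen = np.asarray(seen, dtype=bool)
    quadrants = {
        "upper-right": (xs > 0) & (ys > 0),
        "upper-left": (xs < 0) & (ys > 0),
        "lower-left": (xs < 0) & (ys < 0),
        "lower-right": (xs > 0) & (ys < 0),
    }
    rates = {}
    for name, mask in quadrants.items():
        if mask.any():
            rates[name] = 1.0 - float(seen[mask].mean())
        else:
            rates[name] = float("nan")  # no stimuli presented in this quadrant
    return rates

xs = [5, -5, -5, 5, 6, -6]
ys = [5, 5, -5, -5, 6, -6]
seen = [1, 1, 0, 1, 1, 0]
rates = quadrant_miss_rates(xs, ys, seen)
```

A consistently elevated miss rate in one quadrant, here the lower-left, is the kind of localized defect that would trigger further characterization and classification.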
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training—evaluation in a more realistic setting may be necessary.
van den Broek, Ellen G C; van Eijden, Ans J P M; Overbeek, Mathilde M; Kef, Sabina; Sterkenburg, Paula S; Schuengel, Carlo
2017-01-01
Secure parent-child attachment may help children to overcome the challenges of growing up with a visual or visual-and-intellectual impairment. A large literature exists that provides a blueprint for interventions that promote parental sensitivity and secure attachment. The Video-feedback Intervention to promote Positive Parenting (VIPP) is based on that blueprint. While it has been adapted to several specific at-risk populations, children with visual impairment may require additional adjustments. This study aimed to identify the themes that should be addressed in adapting VIPP and similar interventions. A Delphi consultation was conducted with 13 professionals in the field of visual impairment to select themes for relationship-focused intervention, and these themes informed a systematic literature search. The Delphi group identified six themes: interaction, intersubjectivity, joint attention, exploration, play, and specific behavior. Pairing these themes with search terms for visual impairment or vision disorders and for infants or young children (and their parents), the search yielded 74 articles, which made the six themes for intervention adaptation more specific and concrete. The rich literature on the six visual-impairment-specific themes was dominated by interaction, intersubjectivity, and joint attention. These themes need to be addressed when adapting intervention programs developed for other populations, such as VIPP, which currently focuses on the higher-order constructs of sensitivity and attachment.
Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2014-01-01
Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.
NASA Astrophysics Data System (ADS)
Niemeijer, Sander
2017-04-01
The ESA Atmospheric Toolbox (BEAT) is one of the ESA Sentinel Toolboxes. It consists of a set of software components to read, analyze, and visualize a wide range of atmospheric data products. In addition to the upcoming Sentinel-5P mission it supports a wide range of other atmospheric data products, including those of previous ESA missions, ESA Third Party missions, Copernicus Atmosphere Monitoring Service (CAMS), ground based data, etc. The toolbox consists of three main components that are called CODA, HARP and VISAN. CODA provides interfaces for direct reading of data from earth observation data files. These interfaces consist of command line applications, libraries, direct interfaces to scientific applications (IDL and MATLAB), and direct interfaces to programming languages (C, Fortran, Python, and Java). CODA provides a single interface to access data in a wide variety of data formats, including ASCII, binary, XML, netCDF, HDF4, HDF5, CDF, GRIB, RINEX, and SP3. HARP is a toolkit for reading, processing and inter-comparing satellite remote sensing data, model data, in-situ data, and ground based remote sensing data. The main goal of HARP is to assist in the inter-comparison of datasets. By appropriately chaining calls to HARP command line tools one can pre-process datasets such that two datasets that need to be compared end up having the same temporal/spatial grid, same data format/structure, and same physical unit. The toolkit comes with its own data format conventions, the HARP format, which is based on netcdf/HDF. Ingestion routines (based on CODA) allow conversion from a wide variety of atmospheric data products to this common format. In addition, the toolbox provides a wide range of operations to perform conversions on the data such as unit conversions, quantity conversions (e.g. number density to volume mixing ratios), regridding, vertical smoothing using averaging kernels, collocation of two datasets, etc. 
VISAN is a cross-platform visualization and analysis application for atmospheric data and can be used to visualize and analyze the data that you retrieve using the CODA and HARP interfaces. The application uses the Python language as the means through which you provide commands to the application. The Python interfaces for CODA and HARP are included so you can directly ingest product data from within VISAN. Powerful visualization functionality for 2D plots and geographical plots in VISAN will allow you to directly visualize the ingested data. All components from the ESA Atmospheric Toolbox are Open Source and freely available. Software packages can be downloaded from the BEAT website: http://stcorp.nl/beat/
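The kind of pre-processing chain that HARP automates, a unit conversion followed by regridding onto a common grid, can be sketched generically as below. This is a plain-NumPy illustration with made-up data, not HARP's actual operations syntax or data conventions.

```python
import numpy as np

def to_ppmv(volume_mixing_ratio):
    """Convert a volume mixing ratio from mol/mol to parts per million."""
    return volume_mixing_ratio * 1e6

# Dataset A: a coarse latitude grid with a fake trace-gas profile in mol/mol.
lat_a = np.linspace(-80.0, 80.0, 9)
vmr_a = 1e-6 * (1 + 0.01 * np.abs(lat_a))

# Dataset B: a finer latitude grid we want to compare against.
lat_b = np.linspace(-80.0, 80.0, 33)

# Chain the two operations: convert units, then regrid A onto B's grid
# by linear interpolation, so both datasets share grid and unit.
vmr_a_on_b = np.interp(lat_b, lat_a, to_ppmv(vmr_a))
```

After such a chain, the two datasets share the same spatial grid and physical unit, which is exactly the precondition HARP establishes before an inter-comparison.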
Visual short-term memory load reduces retinotopic cortex response to contrast.
Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli
2012-11-01
Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
Developing Guidelines for Assessing Visual Analytics Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2011-07-01
In this paper, we develop guidelines for evaluating visual analytic environments based on a synthesis of reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and from a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the important factors that they use in evaluating visual analysis systems. We then looked at guidelines developed by researchers in various domains and synthesized these into an initial set for use by others in the community. In a second part of the user study, we looked at guidelines for a new aspect of visual analytic systems – the generation of reports. Future visual analytic systems have been challenged to help analysts generate their reports. In our study we worked with analysts to understand the criteria they used to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. Based on these efforts, we produced initial guidelines for evaluating visual analytic environments and for evaluating analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope because of the type of tasks for which the visual analytic systems used in the studies in this paper were designed. More research and refinement is needed by the Visual Analytics Community to provide additional evaluation guidelines for different types of visual analytic environments.
Assessing natural hazard risk using images and data
NASA Astrophysics Data System (ADS)
Mccullough, H. L.; Dunbar, P. K.; Varner, J. D.; Mungov, G.
2012-12-01
Photographs and other visual media provide valuable pre- and post-event data for natural hazard assessment. Scientific research, mitigation, and forecasting rely on visual data for risk analysis, inundation mapping and historic records. Instrumental data only reveal a portion of the whole story; photographs explicitly illustrate the physical and societal impacts from the event. Visual data is rapidly increasing as the availability of portable high resolution cameras and video recorders becomes more attainable. Incorporating these data into archives ensures a more complete historical account of events. Integrating natural hazards data, such as tsunami, earthquake and volcanic eruption events, socio-economic information, and tsunami deposits and runups along with images and photographs enhances event comprehension. Global historic databases at NOAA's National Geophysical Data Center (NGDC) consolidate these data, providing the user with easy access to a network of information. NGDC's Natural Hazards Image Database (ngdc.noaa.gov/hazardimages) was recently improved to provide a more efficient and dynamic user interface. It uses the Google Maps API and Keyhole Markup Language (KML) to provide geographic context to the images and events. Descriptive tags, or keywords, have been applied to each image, enabling easier navigation and discovery. In addition, the Natural Hazards Map Viewer (maps.ngdc.noaa.gov/viewers/hazards) provides the ability to search and browse data layers on a Mercator-projection globe with a variety of map backgrounds. This combination of features creates a simple and effective way to enhance our understanding of hazard events and risks using imagery.
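The Google Maps API consumes geographic data as KML, so each geolocated image record can be emitted as a placemark. A minimal sketch follows; the example record and element content are illustrative, not NGDC's actual schema.

```python
def kml_placemark(name, lon, lat, description=""):
    """Render one geolocated hazard photo as a KML Placemark."""
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<description>{description}</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

def kml_document(placemarks):
    """Wrap placemarks in a complete KML document."""
    body = "".join(placemarks)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f"{body}</Document></kml>"
    )

doc = kml_document([
    kml_placemark("Tsunami damage survey photo", 141.0, 38.3,
                  "post-event field photograph (illustrative record)"),
])
```

Note that KML orders coordinates as longitude, latitude, altitude; feeding such a document to a KML-aware viewer places each image record at its event location, giving the geographic context described above.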
Task–Technology Fit of Video Telehealth for Nurses in an Outpatient Clinic Setting
Finkelstein, Stanley M.
2014-01-01
Abstract Background: Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task–technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task–technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. Materials and Methods: The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time–motion study. Qualitative and quantitative results were merged and analyzed within the task–technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Results: Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task–technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Conclusions: Telehealth must provide the right information to the right clinician at the right time. Evaluating task–technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology. PMID:24841219
NASA Technical Reports Server (NTRS)
1977-01-01
Data from visual observations are integrated with results of analyses of approximately 600 of the nearly 2000 photographs taken of Earth during the 84-day Skylab 4 mission to provide additional information on (1) Earth features and processes; (2) operational procedures and constraints in observing and photographing the planet; and (3) the use of man in real-time analysis of oceanic and atmospheric phenomena.
Gene therapy for red-green colour blindness in adult primates
Mancuso, Katherine; Hauswirth, William W.; Li, Qiuhong; Connor, Thomas B.; Kuchenbecker, James A.; Mauck, Matthew C.; Neitz, Jay; Neitz, Maureen
2009-01-01
Red-green colour blindness, which results from the absence of either the long- (L) or middle- (M) wavelength-sensitive visual photopigments, is the most common single locus genetic disorder. Here, the possibility of curing colour blindness using gene therapy was explored in experiments on adult monkeys that had been colour blind since birth. A third type of cone pigment was added to dichromatic retinas, providing the receptoral basis for trichromatic colour vision. This opened a new avenue to explore the requirements for establishing the neural circuits for a new dimension of colour sensation. Classic visual deprivation experiments1 have led to the expectation that neural connections established during development would not appropriately process an input that was not present from birth. Therefore, it was believed that treatment of congenital vision disorders would be ineffective unless administered to the very young. Here, however, addition of a third opsin in adult red-green colour-deficient primates was sufficient to produce trichromatic colour vision behaviour. Thus, trichromacy can arise from a single addition of a third cone class and it does not require an early developmental process. This provides a positive outlook for the potential of gene therapy to cure adult vision disorders. PMID:19759534
Gene therapy for red-green colour blindness in adult primates.
Mancuso, Katherine; Hauswirth, William W; Li, Qiuhong; Connor, Thomas B; Kuchenbecker, James A; Mauck, Matthew C; Neitz, Jay; Neitz, Maureen
2009-10-08
Red-green colour blindness, which results from the absence of either the long- (L) or the middle- (M) wavelength-sensitive visual photopigments, is the most common single locus genetic disorder. Here we explore the possibility of curing colour blindness using gene therapy in experiments on adult monkeys that had been colour blind since birth. A third type of cone pigment was added to dichromatic retinas, providing the receptoral basis for trichromatic colour vision. This opened a new avenue to explore the requirements for establishing the neural circuits for a new dimension of colour sensation. Classic visual deprivation experiments have led to the expectation that neural connections established during development would not appropriately process an input that was not present from birth. Therefore, it was believed that the treatment of congenital vision disorders would be ineffective unless administered to the very young. However, here we show that the addition of a third opsin in adult red-green colour-deficient primates was sufficient to produce trichromatic colour vision behaviour. Thus, trichromacy can arise from a single addition of a third cone class and it does not require an early developmental process. This provides a positive outlook for the potential of gene therapy to cure adult vision disorders.
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but its elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Practical life log video indexing based on content and context
NASA Astrophysics Data System (ADS)
Tancharoen, Datchakorn; Yamasaki, Toshihiko; Aizawa, Kiyoharu
2006-01-01
Today, multimedia information has gained an important role in daily life, and people can use imaging devices to capture their visual experiences. In this paper, we present our personal Life Log system, which records personal experiences in the form of wearable video and environmental data; in addition, an efficient retrieval system is demonstrated to recall the desired media. We summarize practical video indexing techniques based on Life Log content and context, detecting talking scenes by using audio/visual cues and semantic key frames from GPS data. Voice annotation is also demonstrated as a practical indexing method. Moreover, we apply body media sensors to record continuous lifestyle data and use body media data to index the semantic key frames. In the experiments, we demonstrate various video indexing results that provide semantic content, and show Life Log visualizations for examining personal life effectively.
NASA Astrophysics Data System (ADS)
Petruse, Radu Emanuil; Batâr, Sergiu; Cojan, Adela; Maniţiu, Ioan
2014-11-01
Coronary computed tomography angiography (CCTA) allows coronary artery visualization and the detection of coronary stenoses. In addition, it has been suggested as a novel, noninvasive modality for coronary atherosclerotic plaque detection, characterization, and quantification. Accurate identification of coronary plaques is challenging, especially for noncalcified plaques, due to many factors such as the small size of coronary arteries, reconstruction artifacts caused by irregular heartbeats, beam hardening, and partial volume averaging. The development of 16-, 32-, 64-, and the latest 320-row multidetector CT not only increases the spatial and temporal resolution significantly, but also substantially increases the number of images to be interpreted by radiologists. Radiologists have to visually examine each coronary artery for suspicious stenoses using visualization tools such as multiplanar reformatting (MPR) and curved planar reformatting (CPR) provided by the review workstation in clinical practice.
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
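The kernelized, sequence-aware SOM summarized in this abstract is not spelled out here, but the core kernel-SOM mechanics it extends can be sketched. Everything in the sketch below is an assumption for illustration, not the authors' implementation: an RBF kernel, a 1-D map, synthetic 2-D feature vectors, and the standard coefficient-space update in which each map unit's prototype is a weighted combination of training samples in feature space.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_som(X, n_units=4, epochs=20, alpha=0.3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    K = rbf_kernel(X, X)                      # Gram matrix of the data
    # Each unit's prototype lives in feature space as a convex
    # combination of training samples, held as coefficient rows G[c].
    G = rng.dirichlet(np.ones(n), size=n_units)
    grid = np.arange(n_units, dtype=float)    # 1-D map topology
    for _ in range(epochs):
        for t in rng.permutation(n):
            # Squared feature-space distance to every unit:
            # k(x,x) - 2 g.k(x,.) + g K g^T
            d2 = K[t, t] - 2 * G @ K[t] + np.einsum('ci,ij,cj->c', G, K, G)
            bmu = int(np.argmin(d2))          # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            # Move each unit toward sample t in coefficient space;
            # this update preserves each row's coefficient sum.
            G = (1 - alpha * h[:, None]) * G
            G[:, t] += alpha * h
    return G

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),   # synthetic cluster A
               rng.normal(3.0, 0.3, (20, 2))])  # synthetic cluster B
G = train_kernel_som(X, n_units=4)
```

Because the prototypes are touched only through kernel evaluations, the same loop works for any positive-definite kernel, including sequence kernels over sound-event features.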
Visualization of the separation and subsequent transition near the leading edge of airfoils
NASA Technical Reports Server (NTRS)
Arena, A. V.; Mueller, T. J.
1978-01-01
A visual study was performed using the low speed smoke wind tunnels with the objective of obtaining a better understanding of the structure of leading edge separation bubbles on airfoils. The location of separation, transition and reattachment for a cylindrical nose constant-thickness airfoil model were obtained from smoke photographs and surface oil flow techniques. These data, together with static pressure distributions along the leading edge and upper surface of the model, revealed the influence of Reynolds number, angle of attack, and trailing edge flap angle on the size and characteristics of the bubble. Additional visual insight into the unsteady nature of the separation bubble was provided by high speed 16 mm movies. The 8 mm color movies taken of the surface oil flow supported the findings of the high speed movies and clearly showed the formation of a scalloped spanwise separation line at the higher Reynolds number.
Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.
2012-01-01
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893
[Spectral sensitivity and visual pigments of the coastal crab Hemigrapsus sanguineus].
Shukoliukov, S A; Zak, P P; Kalamkarov, G R; Kalishevich, O O; Ostrovskiĭ, M A
1980-01-01
It has been shown that the compound eye of the coastal crab has one photosensitive pigment, rhodopsin, and two screening pigments, a black and an orange one. The orange pigment has lambda max = 480 nm; rhodopsin in digitonin is stable towards hydroxylamine action, has lambda max = 490-495 nm, and after bleaching is transformed into free retinene and opsin. Pigments with lambda max = 430 and 475 nm from the receptor part of the eye are also solubilized. These pigments are not photosensitive, but they dissociate under the effect of hydroxylamine. The spectral sensitivity curve of the coastal crab has its basic maximum at approximately 525 nm and an additional one at 450 nm; the basic maximum seems to be produced by a combination of the visual pigment rhodopsin (lambda max 500 nm) with a carotenoid filter (lambda max 480-490 nm). Specific features of the visual system of the coastal crab are discussed.
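The proposed mechanism, a 500 nm rhodopsin screened by a 480-490 nm orange filter yielding a longer-wavelength sensitivity peak, can be illustrated numerically. The curves below are crude stand-ins (a Gaussian absorbance band and a sigmoidal long-pass filter with guessed widths), not measured pigment templates:

```python
import math

def gaussian_band(lmax, width):
    # Crude Gaussian stand-in for a visual pigment absorbance template.
    return lambda lam: math.exp(-((lam - lmax) / width) ** 2)

rhodopsin = gaussian_band(500, 60)   # pigment with lambda_max at 500 nm
# Long-pass screening filter rising around 480 nm (assumed slope).
orange_filter = lambda lam: 1.0 / (1.0 + math.exp(-(lam - 480) / 15))

wavelengths = range(400, 651, 5)
# Effective sensitivity = pigment absorbance x filter transmission.
sensitivity = [rhodopsin(l) * orange_filter(l) for l in wavelengths]
peak = list(wavelengths)[sensitivity.index(max(sensitivity))]
# Screening suppresses the short-wavelength flank, so the effective
# peak sits long of the pigment's own 500 nm maximum.
```

With these toy parameters the product peaks near 510 nm; with realistic templates and filter densities the shift can plausibly reach the observed ~525 nm.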
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall scheme for the system architecture, then specified the software and hardware environment and determined the main function modules for the system. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and other computer programming languages based on the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated in the system for easy maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. Experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.
Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul
2009-01-01
Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about the number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.
Birkett, Emma E; Talcott, Joel B
2012-01-01
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
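The "decomposition of timing variance" referred to above is commonly done with the Wing-Kristofferson model, which splits inter-tap interval variance into a central timekeeper component and a motor implementation component using the lag-1 autocovariance. The sketch below assumes that model (the paper may use a variant) and uses synthetic tapping data near the 329 ms ISI mentioned in the abstract:

```python
import random

def wing_kristofferson(intervals):
    """Decompose inter-tap interval variance into central timekeeper
    and motor-implementation components (Wing & Kristofferson, 1973).
    Model: var(I) = clock_var + 2*motor_var, lag-1 autocov(I) = -motor_var."""
    n = len(intervals)
    mean = sum(intervals) / n
    dev = [i - mean for i in intervals]
    var = sum(d * d for d in dev) / n
    acov1 = sum(dev[k] * dev[k + 1] for k in range(n - 1)) / n
    motor_var = max(0.0, -acov1)          # negative autocovariance -> motor noise
    clock_var = max(0.0, var - 2 * motor_var)
    return clock_var, motor_var

# Synthetic taps generated exactly per the model:
# I_k = C_k + M_{k+1} - M_k, with clock noise C and motor delays M.
rng = random.Random(7)
clock = [rng.gauss(330, 20) for _ in range(5001)]     # sd 20 ms -> var 400
motor = [rng.gauss(0, 10) for _ in range(5002)]       # sd 10 ms -> var 100
taps = [clock[k] + motor[k + 1] - motor[k] for k in range(5001)]
clock_var, motor_var = wing_kristofferson(taps)
```

Recovered estimates should land near the generating variances (clock ~400, motor ~100), which is how such decompositions separate central from peripheral timing differences across auditory and visual pacing conditions.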
Connecting Swath Satellite Data With Imagery in Mapping Applications
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.
2016-12-01
Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.
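The DAISY indexing scheme itself is not described in this summary, so the following is only an illustrative sketch of the general idea of keeping Level 2 swath samples addressable from a raster view: bin each sample's geolocation into a coarse lat/lon grid so that an image pixel can be traced back to the nearest source data value. All names, parameters, and sample values below are hypothetical.

```python
from collections import defaultdict

class SwathIndex:
    """Coarse lat/lon binning of ungridded (Level 2) samples so a
    raster pixel can be traced back to nearby source data values."""
    def __init__(self, cell_deg=1.0):
        self.cell = cell_deg
        self.bins = defaultdict(list)

    def _key(self, lat, lon):
        # Floor division gives a stable cell id for negative coords too.
        return (int(lat // self.cell), int(lon // self.cell))

    def add(self, lat, lon, value):
        self.bins[self._key(lat, lon)].append((lat, lon, value))

    def nearest(self, lat, lon):
        # Search the containing cell and its 8 neighbours.
        ki, kj = self._key(lat, lon)
        candidates = [s for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      for s in self.bins.get((ki + di, kj + dj), [])]
        return min(candidates, default=None,
                   key=lambda s: (s[0] - lat) ** 2 + (s[1] - lon) ** 2)

idx = SwathIndex(cell_deg=0.5)
idx.add(34.02, -118.28, 291.5)   # e.g. a sea-surface temperature sample
idx.add(34.40, -118.10, 290.1)
```

A production index would also need to handle the antimeridian and great-circle distance, but the cell-binning idea is what lets a client go from an image pixel back to underlying swath values without regridding.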
Intelligent Visualization of Geo-Information on the Future Web
NASA Astrophysics Data System (ADS)
Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.
2012-04-01
Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g. images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights in their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we choose to use direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge to overcome. In the end we use a combination of deformation methods such as deformation textures in order to fit a normal torus into their deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines for us. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL.
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.
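Storing the field data in toroidal coordinates, as described above, rests on a standard mapping between Cartesian and simple (circular-axis) toroidal coordinates. A minimal sketch follows; the major radius R0 is a placeholder parameter, not a value from the project, and the actual mesh follows a deformed torus rather than this idealized one:

```python
import math

def cartesian_to_toroidal(x, y, z, R0=1.0):
    """Map (x, y, z) to toroidal (r, theta, phi) about a circular
    axis of major radius R0 lying in the z=0 plane."""
    R = math.hypot(x, y)            # distance from the symmetry axis
    phi = math.atan2(y, x)          # toroidal angle
    r = math.hypot(R - R0, z)       # minor radius
    theta = math.atan2(z, R - R0)   # poloidal angle
    return r, theta, phi

def toroidal_to_cartesian(r, theta, phi, R0=1.0):
    # Inverse mapping back to Cartesian coordinates.
    R = R0 + r * math.cos(theta)
    return R * math.cos(phi), R * math.sin(phi), r * math.sin(theta)
```

Once data live in (r, theta, phi), interpolating "along the field line" reduces to interpolating in the angular coordinates between slices, which is what makes the GPU texture-based scheme practical.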
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes the Parallel Virtual Machine (PVM) for distributed processing.
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Weakley, Jonathon Js; Wilson, Kyle M; Till, Kevin; Read, Dale B; Darrall-Jones, Joshua; Roe, Gregory; Phibbs, Padraic J; Jones, Ben
2017-07-12
It is unknown whether instantaneous visual feedback of resistance training outcomes can enhance barbell velocity in younger athletes. Therefore, the purpose of this study was to quantify the effects of visual feedback on mean concentric barbell velocity in the back squat, and to identify changes in motivation, competitiveness, and perceived workload. In a randomised-crossover design (Feedback vs. Control), feedback of mean concentric barbell velocity was or was not provided throughout a set of 10 repetitions in the barbell back squat. Magnitude-based inferences were used to assess changes between conditions, with almost certainly greater mean concentric velocity observed in the Feedback (0.70 ±0.04 m·s⁻¹) than in the Control (0.65 ±0.05 m·s⁻¹) condition. Additionally, individual repetition mean concentric velocity ranged from possibly (repetition two: 0.79 ±0.04 vs. 0.78 ±0.04 m·s⁻¹) to almost certainly (repetition 10: 0.58 ±0.05 vs. 0.49 ±0.05 m·s⁻¹) greater when feedback was provided, while almost certain differences were observed in motivation, competitiveness, and perceived workload. Providing adolescent male athletes with visual kinematic information while completing resistance training is beneficial for the maintenance of barbell velocity during a training set, potentially enhancing physical performance. Moreover, these improvements were observed alongside increases in motivation, competitiveness and perceived workload, providing insight into the underlying mechanisms responsible for the performance gains observed. Given the observed maintenance of barbell velocity during a training set, practitioners can use this technique to manipulate training outcomes during resistance training.
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-01
Monte Carlo simulations play a vital role in understanding the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, that facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, allowing the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
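A pulse-height spectrum of the kind these interfaces display is, at bottom, a histogram of detected optical photons per primary event. The sketch below uses synthetic event data, not hybridmantis output; the Gaussian photon yield and bin settings are assumptions for illustration:

```python
import random

def pulse_height_spectrum(counts_per_event, n_bins=32, max_count=None):
    """Histogram detected optical photon counts per primary event."""
    max_count = max_count or max(counts_per_event)
    width = max_count / n_bins
    spectrum = [0] * n_bins
    for c in counts_per_event:
        b = min(int(c / width), n_bins - 1)   # clamp overflow into last bin
        spectrum[b] += 1
    return spectrum

rng = random.Random(42)
# Synthetic: each x-ray event yields a roughly Gaussian number of photons.
events = [max(0, int(rng.gauss(500, 60))) for _ in range(10_000)]
spec = pulse_height_spectrum(events, n_bins=32, max_count=800)
```

The shape of this histogram (peak position, width, low-count tail) is what a detector designer reads off to judge light yield and depth-dependent collection losses.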
Analysis, Mining and Visualization Service at NCSA
NASA Astrophysics Data System (ADS)
Wilhelmson, R.; Cox, D.; Welge, M.
2004-12-01
NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services.
Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGI systems recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.
Visual preference and ecological assessments for designed alternative brownfield rehabilitations.
Lafortezza, Raffaele; Corry, Robert C; Sanesi, Giovanni; Brown, Robert D
2008-11-01
This paper describes an integrative method for quantifying, analyzing, and comparing the effects of alternative rehabilitation approaches with visual preference. The method was applied to a portion of a major industrial area located in southern Italy. Four alternative approaches to rehabilitation (alternative designs) were developed and analyzed. The scenarios consisted of the cleanup of the brownfields plus: (1) the addition of ground cover species; (2) the addition of ground cover species and a few trees randomly distributed; (3) the addition of ground cover species and a few trees in small groups; and (4) the addition of ground cover species and several trees in large groups. The approaches were analyzed and compared to the baseline condition through the use of cost-surface modeling (CSM) and visual preference assessment (VPA). Statistical results showed that alternatives that were more ecologically functional for forest bird species dispersal were also more visually preferable. Some differences were identified based on user groups and location of residence. The results of the study are used to identify implications for enhancing both ecological attributes and visual preferences of rehabilitating landscapes through planning and design.
Interface Metaphors for Interactive Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Robert J.; Blaha, Leslie M.
To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user's knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Sims, Michael; Kunz, Clayton; Lees, David; Bowman, Judd
2005-01-01
Modern NASA planetary exploration missions employ complex systems of hardware and software managed by large teams of engineers and scientists in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
Attentional enhancement during multiple-object tracking.
Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K
2009-04-01
What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.
Helioviewer.org: Enhanced Solar & Heliospheric Data Visualization
NASA Astrophysics Data System (ADS)
Stys, J. E.; Ireland, J.; Hughitt, V. K.; Mueller, D.
2013-12-01
Helioviewer.org enables the simultaneous exploration of multiple heterogeneous solar data sets. In the latest iteration of this open-source web application, Hinode XRT and Yohkoh SXT join SDO, SOHO, STEREO, and PROBA2 as supported data sources. A newly enhanced user interface expands the utility of Helioviewer.org by adding annotations backed by data from the Heliophysics Events Knowledgebase (HEK). Helioviewer.org can now overlay solar feature and event data via interactive marker pins, extended regions, data labels, and information panels. An interactive timeline provides enhanced browsing and visualization of image data set coverage and solar events. The addition of a size-of-the-Earth indicator provides a sense of the scale of solar and heliospheric features for education and public outreach purposes. Tight integration with the Virtual Solar Observatory and the SDO AIA cutout service enables solar physicists to seamlessly import science data into their SSW/IDL or SunPy/Python data analysis environments.
Multi-Scale Surface Descriptors
Cipriano, Gregory; Phillips, George N.; Gleicher, Michael
2010-01-01
Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors statistically capture the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
Thanki, Anil S; Soranzo, Nicola; Haerty, Wilfried; Davey, Robert P
2018-03-01
Gene duplication is a major factor contributing to evolutionary novelty, and the contraction or expansion of gene families has often been associated with morphological, physiological, and environmental adaptations. The study of homologous genes helps us to understand the evolution of gene families. It plays a vital role in finding ancestral gene duplication events as well as identifying genes that have diverged from a common ancestor under positive selection. There are various tools available, such as MSOAR, OrthoMCL, and HomoloGene, to identify gene families and visualize syntenic information between species, providing an overview of the evolution of syntenic regions at the family level. Unfortunately, none of them provide information about structural changes within genes, such as the conservation of ancestral exon boundaries among multiple genomes. The Ensembl GeneTrees computational pipeline generates gene trees based on coding sequences, provides details about exon conservation, and is used in the Ensembl Compara project to discover gene families. A certain amount of expertise is required to configure and run the Ensembl Compara GeneTrees pipeline via the command line. Therefore, we converted this pipeline into a Galaxy workflow, called GeneSeqToFamily, and provided additional functionality. This workflow uses existing tools from the Galaxy ToolShed, as well as providing additional wrappers and tools that are required to run the workflow. GeneSeqToFamily represents the Ensembl GeneTrees pipeline as a set of interconnected Galaxy tools, so they can be run interactively within Galaxy's user-friendly workflow environment while still providing the flexibility to tailor the analysis by changing configurations and tools if necessary. Additional tools allow users to subsequently visualize the gene families produced by the workflow, using the Aequatus.js interactive tool, which has been developed as part of the Aequatus software project.
User-Centered Evaluation of Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean C.
Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on User-centered Evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics.
Researchers and practitioners in HCI who are interested in visual analytics will find this information useful, as will readers of the discussion on changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis and analysis techniques and problems is provided, as well as an introduction to user-centered evaluation and various evaluation techniques for readers from different disciplines. The understanding of these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics to conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations to incorporate. While many researchers view work in user-centered evaluation as a less-than-exciting area, the opposite is true. First of all, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term "users" in almost all situations, there are a wide variety of users that all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding. Just understanding the complexities of a number of these domains is exciting. Researchers are trying out different visualizations and interactions as well. And of course, the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing.
There are no standard processes and metrics, and thus those of us working on user-centered evaluation must be creative in our work with both the users and with the researchers and developers.
Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E
2017-11-01
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those that exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) with higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related with motor performance.
In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test. Copyright © 2017 the American Physiological Society.
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B.; Dimas, Antigone S.; Gutierrez-Arcelus, Maria; Stranger, Barbara E.; Deloukas, Panos; Dermitzakis, Emmanouil T.
2010-01-01
Summary: Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. Availability: http://www.sanger.ac.uk/resources/software/genevar Contact: emmanouil.dermitzakis@unige.ch PMID:20702402
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Angeles-Han, Sheila T; Rabinovich, Consuelo Egla
2016-09-01
The review provides updates on novel risk markers for the development of pediatric inflammatory uveitis and a severe disease course, on treatment of refractory disease, and on the measurement of visual outcomes. There are several new genetic markers, biomarkers, and clinical factors that may influence a child's uveitis disease course. It is important to identify children at risk for poor visual outcomes and who are refractory to traditional therapy. Racial disparities have recently been reported. We describe agents of potential benefit. In addition, we discuss the importance of patient reported outcomes in this population. Uveitis can lead to vision-threatening complications. Timely and aggressive treatment of children identified to be at risk for a severe uveitis course may lead to improved outcomes.
Removal of phosphate from greenhouse wastewater using hydrated lime.
Dunets, C Siobhan; Zheng, Youbin
2014-01-01
Phosphate (P) contamination in nutrient-laden wastewater is currently a major topic of discussion in the North American greenhouse industry. Precipitation of P as calcium phosphate minerals using hydrated lime could provide a simple, inexpensive method for retrieval. A combination of batch experiments and chemical equilibrium modelling was used to confirm the viability of this P removal method and determine lime addition rates and pH requirements for greenhouse wastewater of varying nutrient compositions. Lime:P ratio (molar ratio of CaMg(OH)₄:PO₄-P) provided a consistent parameter for estimating lime addition requirements regardless of initial P concentration, with a ratio of 1.5 providing around 99% removal of dissolved P. Optimal P removal occurred when lime addition increased the pH from 8.6 to 9.0, suggesting that pH monitoring during the P removal process could provide a simple method for ensuring consistent adherence to P removal standards. A Visual MINTEQ model, validated using experimental data, provided a means of predicting lime addition and pH requirements as influenced by changes in other parameters of the lime-wastewater system (e.g. calcium concentration, temperature, and initial wastewater pH). Hydrated lime addition did not contribute to the removal of macronutrient elements such as nitrate and ammonium, but did decrease the concentration of some micronutrients. This study provides basic guidance for greenhouse operators to use hydrated lime for phosphate removal from greenhouse wastewater.
Coherent visualization of spatial data adapted to roles, tasks, and hardware
NASA Astrophysics Data System (ADS)
Wagner, Boris; Peinsipp-Byma, Elisabeth
2012-06-01
Modern crisis management requires that users with different roles and computer environments have to deal with a high volume of various data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on available data and the task he has to solve. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. To set the visualization rules, generic Styled Layer Descriptors (SLDs) are used, which are an Open Geospatial Consortium (OGC) standard. SLDs allow specifying which data are shown, when, and how. The defined SLDs consider the users' roles and task requirements. In addition, it is possible to use different displays, and the visualization also adapts to the individual resolution of the display. Too high or low information density is avoided. Also, our system enables users with different roles to work together simultaneously using the same database. Every user is provided with the appropriate and coherent spatial data depending on his current task. These refined spatial data are served via the OGC services Web Map Service (WMS: server-side rendered raster maps) or Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
Colombet, B; Woodman, M; Badier, J M; Bénar, C G
2015-03-15
The importance of digital signal processing in clinical neurophysiology is growing steadily, involving clinical researchers and methodologists. There is a need for crossing the gap between these communities by providing efficient delivery of newly designed algorithms to end users. We have developed such a tool which both visualizes and processes data and, additionally, acts as a software development platform. AnyWave was designed to run on all common operating systems. It provides access to a variety of data formats and it employs high fidelity visualization techniques. It also allows using external tools as plug-ins, which can be developed in languages including C++, MATLAB and Python. In the current version, plug-ins allow computation of connectivity graphs (non-linear correlation h2) and time-frequency representation (Morlet wavelets). The software is freely available under the LGPL3 license. AnyWave is designed as an open, highly extensible solution, with an architecture that permits rapid delivery of new techniques to end users. We have developed AnyWave software as an efficient neurophysiological data visualizer able to integrate state of the art techniques. AnyWave offers an interface well suited to the needs of clinical research and an architecture designed for integrating new tools. We expect this software to strengthen the collaboration between clinical neurophysiologists and researchers in biomedical engineering and signal processing. Copyright © 2015 Elsevier B.V. All rights reserved.
Alio, Jorge L; Plaza-Puche, Ana B; Javaloy, Jaime; Ayala, María José; Moreno, Luis J; Piñero, David P
2012-03-01
To compare the visual acuity outcomes and ocular optical performance of eyes implanted with a multifocal refractive intraocular lens (IOL) with an inferior segmental near add or a diffractive multifocal IOL. Prospective, comparative, nonrandomized, consecutive case series. Eighty-three consecutive eyes of 45 patients (age range, 36-82 years) with cataract were divided into 2 groups: group A, 45 eyes implanted with Lentis Mplus LS-312 (Oculentis GmbH, Berlin, Germany); group B, 38 eyes implanted with diffractive IOL Acri.Lisa 366D (Zeiss, Oberkochen, Germany). All patients underwent phacoemulsification followed by IOL implantation in the capsular bag. Corrected distance visual acuity, intermediate and near visual acuity with the distance correction, contrast sensitivity, intraocular aberrations, and the defocus curve were evaluated postoperatively during a 3-month follow-up. Uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected near visual acuity (UNVA), corrected distance near and intermediate visual acuity (CDNVA), contrast sensitivity, intraocular aberrations, and defocus curve. A significant improvement in UDVA, CDVA, and UNVA was observed in both groups after surgery (P ≤ 0.04). Significantly better values of UNVA (P<0.01) and CDNVA (P<0.04) were found in group B. In the defocus curve, significantly better visual acuities were present in eyes in group A for intermediate vision levels of defocus (P ≤ 0.04). Significantly higher amounts of postoperative intraocular primary coma and spherical aberrations were found in group A (P<0.01). In addition, significantly better values were observed in photopic contrast sensitivity for high spatial frequencies in group A (P ≤ 0.04). The Lentis Mplus LS-312 and Acri.Lisa 366D IOLs are able to successfully restore visual function after cataract surgery. The Lentis Mplus LS-312 provided better intermediate vision and contrast sensitivity outcomes than the Acri.Lisa 366D.
However, the Acri.Lisa design provided better distance and near visual outcomes and intraocular optical performance parameters. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie
2017-01-01
Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front-end forms. The Chado Data Display module contains tools to visualize each data type and the metadata, which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are tools to visualize map and species summaries. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal Extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/
Automated UAV-based video exploitation using service oriented architecture framework
NASA Astrophysics Data System (ADS)
Se, Stephen; Nadeau, Christian; Wood, Scott
2011-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
Maser: one-stop platform for NGS big data from analysis to visualization
Kinjo, Sonoko; Monma, Norikazu; Misu, Sadahiko; Kitamura, Norikazu; Imoto, Junichi; Yoshitake, Kazutoshi; Gojobori, Takashi; Ikeo, Kazuho
2018-01-01
A major challenge in analyzing the data from high-throughput next-generation sequencing (NGS) is how to handle the huge amounts of data and variety of NGS tools and visualize the resultant outputs. To address these issues, we developed a cloud-based data analysis platform, Maser (Management and Analysis System for Enormous Reads), and an original genome browser, Genome Explorer (GE). Maser enables users to manage up to 2 terabytes of data to conduct analyses with easy graphical user interface operations, and offers analysis pipelines in which several individual tools are combined as a single pipeline for very common and standard analyses. GE automatically visualizes genome assembly and mapping results output from Maser pipelines, without requiring additional data upload. With this function, the Maser pipelines can graphically display the results output from all the embedded tools and mapping results in a web browser. Maser therefore provides a more user-friendly analysis platform, especially for beginners, by improving the graphical display and providing selected standard pipelines that work with the built-in genome browser. In addition, all the analyses executed on Maser are recorded in the analysis history, helping users to trace and repeat the analyses. The entire process of analysis and its histories can be shared with collaborators or opened to the public. In conclusion, our system is useful for managing, analyzing, and visualizing NGS data and achieves traceability, reproducibility, and transparency of NGS analysis. Database URL: http://cell-innovation.nig.ac.jp/maser/ PMID:29688385
Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng
2016-01-01
The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing "what" and "where" visual perceptions and investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in "what" and "where" visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three "where" visual cortices in the late MCI group and extensive atrophy of HLV cortices (25 regions in both "what" and "where" visual cortices) in the AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated with the deterioration of overall cognitive status and with cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770
A neural computational model for animal's time-to-collision estimation.
Wang, Ling; Yao, Dezhong
2013-04-17
The time-to-collision (TTC) is the time that elapses before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and is an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretical formula for TTC is 1/τ ≈ θ'/sin θ, where θ and θ' are the visual angle and its rate of change, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simple computational model: 1/τ ≈ Mθ - P/(θ + Q) + N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly by a widely accepted biological neuronal model. WSVAM has additional merits, including naturally minimal consumption and simplicity. Thus, it yields a precise, neuronally implementable estimation of TTC, which provides a simple and convenient implementation for artificial vision and represents a potential visual brain mechanism.
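The two closed-form estimates above are easy to compare numerically. A minimal sketch in Python; the function and variable names, and the WSVAM constants, are illustrative placeholders rather than values from the paper:

```python
import math

def ttc_inverse_exact(theta, dtheta):
    """Theoretical inverse TTC: 1/tau = dtheta / sin(theta), theta in radians."""
    return dtheta / math.sin(theta)

def ttc_inverse_approx(theta, dtheta):
    """Widely used approximation: 1/tau = dtheta / theta."""
    return dtheta / theta

def ttc_inverse_wsvam(theta, M, P, Q, N):
    """Functional form of the proposed WSVAM: 1/tau = M*theta - P/(theta+Q) + N.
    The constants depend on a predefined visual angle; real values would be fitted."""
    return M * theta - P / (theta + Q) + N

# For small visual angles sin(theta) ~ theta, so the two estimates nearly agree.
theta, dtheta = 0.05, 0.2  # rad and rad/s, illustrative values only
exact = ttc_inverse_exact(theta, dtheta)
approx = ttc_inverse_approx(theta, dtheta)
```

Because sin θ < θ for θ > 0, the approximation slightly underestimates 1/τ, with the gap growing as the visual angle widens.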
Cognitive functioning following traumatic brain injury: A five-year follow-up.
Marsh, Nigel V; Ludbrook, Maria R; Gaffaney, Lauren C
2016-01-01
To describe the long-term prevalence and severity of cognitive deficits following significant (i.e., ventilation required for >24 hours) traumatic brain injury. To assess a comprehensive range of cognitive functions using psychometric measures with established normative, reliability, and validity data. A group of 71 adults was assessed approximately five years (mean = 66 months) after injury. Assessment of cognitive functioning covered the domains of intelligence, attention, verbal and visual memory, visual-spatial construction, and executive functions. Impairment was evident across all domains, but prevalence varied both within and between domains. Clinical impairment ranged from 8-25% across aspects of intelligence, 39-62% for attention, 16-46% for verbal memory, 23-51% for visual memory, 38% for visual-spatial construction, and 13% for executive functions (verbal fluency). In addition, 3-23% of performances across the measures were in the borderline range, suggesting a high prevalence of subclinical deficit. Although the prevalence of impairment varied across cognitive domains, long-term follow-up documented deficits in all six domains. These findings provide further evidence that while improvement of cognitive functioning following significant traumatic brain injury may be possible, recovery of function is unlikely.
Enhancements to VTK enabling Scientific Visualization in Immersive Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish
Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications.
Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays
NASA Astrophysics Data System (ADS)
Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko
The increasing prevalence of distributed human microtasking, or crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produces overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information under temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify locations flagged as viable archaeological sites.
Visual Aid Tool to Improve Decision Making in Anticoagulation for Stroke Prevention.
Saposnik, Gustavo; Joundi, Raed A
2016-10-01
The management of stroke prevention among patients with atrial fibrillation (AF) has changed in the last few years. Despite the benefits of new oral anticoagulants (NOACs), decisions about the optimal agent remain a challenge. We provide a visual aid tool to guide clinicians and patients in the decision process of selecting oral anticoagulants for stroke prevention. We created visual plots representing benefits of warfarin versus NOACs from a meta-analysis comprising 58,541 participants. Visual plots (Cates plots) were created using software available at nntonline.net. The primary outcome was stroke or systemic embolism during the study period. In the chosen meta-analysis, 29,312 participants received a NOAC and 29,229 participants received warfarin. For every 1000 patients with AF, 38 would have a stroke or systemic embolic event in the warfarin group compared to 31 in the NOAC group (RR .81; 95% CI .73-.91). Fifteen patients would develop an intracranial hemorrhage in the warfarin group compared to 7 in the NOAC group (RR .48; 95% CI .39-.59). Conversely, 25 patients would develop gastrointestinal bleeding in the NOAC group compared to 20 in the warfarin group (RR 1.25; 95% CI 1.01-1.55). For every 1000 treated individuals with AF, NOACs would prevent stroke or systemic embolism in 7 additional patients and cerebral hemorrhage in 8 additional patients compared to warfarin. On the other hand, 5 more patients would develop gastrointestinal bleeding with NOACs compared to warfarin. These data are visually shown in Cates plots, facilitating conversations with patients regarding anticoagulation decisions. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
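The per-1000 figures reported above follow directly from the warfarin baseline rates and the pooled relative risks. A minimal sketch of that arithmetic, using the numbers given in the abstract:

```python
def events_per_1000(baseline_per_1000, relative_risk):
    """Expected events per 1000 patients when a relative risk is applied
    to the comparator arm's baseline rate (rounded to whole patients)."""
    return round(baseline_per_1000 * relative_risk)

# Warfarin baseline rates per 1000 and NOAC relative risks from the abstract.
stroke_noac = events_per_1000(38, 0.81)  # stroke or systemic embolism -> 31
ich_noac = events_per_1000(15, 0.48)     # intracranial hemorrhage -> 7
gi_noac = events_per_1000(20, 1.25)      # gastrointestinal bleeding -> 25

prevented_stroke = 38 - stroke_noac      # 7 fewer events per 1000 with NOACs
prevented_ich = 15 - ich_noac            # 8 fewer events per 1000 with NOACs
excess_gi = gi_noac - 20                 # 5 more events per 1000 with NOACs
```

These absolute differences per 1000 treated patients are exactly what a Cates plot renders as colored icons, which is why the format suits shared decision-making.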
The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.
Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan
2013-09-01
Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process, in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously and a multi-task condition, combining the dual-task with an additional short-term memory task (temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When adding the additional short-term memory task, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components including areas that are likely to be involved in online holding of visual stimuli in short-term memory such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.
Douglas, Graeme; Pavey, Sue; Corcoran, Christine; Eperjesi, Frank
2010-11-01
Network 1000 is a UK-based panel survey of a representative sample of adults with registered visual impairment, with the aim of gathering information about people's opinions and circumstances. Participants were interviewed (Survey 1, n = 1007: 2005; Survey 2, n = 922: 2006/07) on a range of topics including the nature of their eye condition, details of other health issues, use of low vision aids (LVAs), and their experiences in eye clinics. Eleven percent of individuals did not know the name of their eye condition. Seventy percent of participants reported having long-term health problems or disabilities in addition to visual impairment, and 43% reported having hearing difficulties. Seventy-one percent reported using LVAs for reading tasks. Participants who had become registered as visually impaired in the previous 8 years (n = 395) were asked about non-medical information received in the eye clinic around that time. Reported information included advice about 'registration' (48%), low vision aids (45%), and social care routes (43%); 17% reported receiving no information. While 70% of people were satisfied with the information received, satisfaction was lower for those of working age (56%) compared with those of retirement age (72%). Those who recalled receiving additional non-medical information and advice at the time of registration also recalled their experiences more positively. While caution should be applied to the accuracy of recall of past events, the data provide valuable insight into the types of information and support that visually impaired people feel they would benefit from in the eye clinic. © 2010 The Authors. Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
UCSC genome browser: deep support for molecular biomedical research.
Mangan, Mary E; Williams, Jennifer M; Lathe, Scott M; Karolchik, Donna; Lathe, Warren C
2008-01-01
The volume and complexity of genomic sequence data, and the additional experimental data required for annotation of the genomic context, pose a major challenge for display and access for biomedical researchers. Genome browsers organize this data and make it available in various ways to extract useful information to advance research projects. The UCSC Genome Browser is one of these resources. The official sequence data for a given species forms the framework for displaying many other types of data, such as expression, variation, cross-species comparisons, and more. Visual representations of the data are available for exploration. Data can be queried with sequences. Complex database queries are also easily achieved with the Table Browser interface. Associated tools permit additional query types or access to additional data sources, such as images of in situ localizations. Support for solving researchers' issues is provided through active discussion mailing lists and updated training materials. The UCSC Genome Browser provides a source of deep support for a wide range of biomedical molecular research (http://genome.ucsc.edu).
MEG/EEG Source Reconstruction, Statistical Evaluation, and Visualization with NUTMEG
Dalal, Sarang S.; Zumer, Johanna M.; Guggisberg, Adrian G.; Trumpis, Michael; Wong, Daniel D. E.; Sekihara, Kensuke; Nagarajan, Srikantan S.
2011-01-01
NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions. PMID:21437174
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)
1996-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard (Inventor)
1994-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield
Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.
2014-01-01
Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method the time it takes for interocularly suppressed stimuli to breach the threshold of visibility, is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476
A normalization model suggests that attention changes the weighting of inputs between visual areas
Ruff, Douglas A.; Cohen, Marlene R.
2017-01-01
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1–MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations. PMID:28461501
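The divisive-normalization form that the model above builds on can be sketched in a few lines. This is a generic textbook version with a simple multiplicative attention gain on the drive; it is illustrative only and not the authors' fitted model:

```python
def normalized_response(drive, pool, sigma=1.0, attention_gain=1.0):
    """Generic divisive normalization: a neuron's excitatory drive is
    divided by a pooled signal from a population of neurons plus a
    semi-saturation constant sigma. `attention_gain` scales the drive,
    one common (illustrative) way to insert attention into such models."""
    return (attention_gain * drive) / (sigma + sum(pool))

# Same stimulus drive and normalization pool, with and without attention.
r_unattended = normalized_response(10.0, [10.0, 5.0, 5.0])
r_attended = normalized_response(10.0, [10.0, 5.0, 5.0], attention_gain=1.5)
# Attention boosts the response multiplicatively without changing the pool.
```

Extending such a model across areas, as the paper does for V1 and MT, amounts to letting attention modulate the weights with which one area's outputs enter another area's drive and pool.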
NASA Astrophysics Data System (ADS)
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
2006-03-01
Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization, which can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools for navigating and manipulating the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. This works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid in interpretation and diagnosis.
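The "alpha-spike" idea, assigning non-linear opacity so that only voxels near an intensity of interest are revealed, can be sketched as a one-dimensional transfer function. The shape and parameters below are hypothetical, since the abstract does not give a formula:

```python
def alpha_spike(intensity, center, width, peak=1.0, floor=0.0):
    """Hypothetical spike-shaped opacity transfer function: voxels whose
    normalized intensity lies within `width` of `center` get opacity rising
    to `peak`; everything else drops to `floor`, hiding it in the rendering."""
    d = abs(intensity - center) / width
    if d >= 1.0:
        return floor
    # Quadratic falloff gives a sharp, spike-like opacity profile.
    return floor + (peak - floor) * (1.0 - d) ** 2

# A spike centered on an intensity range of interest (e.g. a hot PET region).
opacity_at_center = alpha_spike(0.50, center=0.50, width=0.10)
opacity_outside = alpha_spike(0.70, center=0.50, width=0.10)
```

In a texture-based volume renderer this function would be sampled into a lookup table and applied per-voxel, which is what keeps the interaction real-time on modest hardware.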
van den Heuvel, Maarten R C; van Wegen, Erwin E H; de Goede, Cees J T; Burgers-Bots, Ingrid A L; Beek, Peter J; Daffertshofer, Andreas; Kwakkel, Gert
2013-10-04
Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease. Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography. We hypothesize that balance training based on visual feedback will show greater improvements on standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible in the co-registered posturographic recordings but also through changes in functional connectivity.
The rainfall plot: its motivation, characteristics and pitfalls.
Domanska, Diana; Vodák, Daniel; Lund-Andersen, Christin; Salvatore, Stefania; Hovig, Eivind; Sandve, Geir Kjetil
2017-05-18
A visualization referred to as rainfall plot has recently gained popularity in genome data analysis. The plot is mostly used for illustrating the distribution of somatic cancer mutations along a reference genome, typically aiming to identify mutation hotspots. In general terms, the rainfall plot can be seen as a scatter plot showing the location of events on the x-axis versus the distance between consecutive events on the y-axis. Despite its frequent use, the motivation for applying this particular visualization and the appropriateness of its usage have never been critically addressed in detail. We show that the rainfall plot allows visual detection even for events occurring at high frequency over very short distances. In addition, event clustering at multiple scales may be detected as distinct horizontal bands in rainfall plots. At the same time, due to the limited size of standard figures, rainfall plots might suffer from inability to distinguish overlapping events, especially when multiple datasets are plotted in the same figure. We demonstrate the consequences of plot congestion, which results in obscured visual data interpretations. This work provides the first comprehensive survey of the characteristics and proper usage of rainfall plots. We find that the rainfall plot is able to convey a large amount of information without any need for parameterization or tuning. However, we also demonstrate how plot congestion and the use of a logarithmic y-axis may result in obscured visual data interpretations. To aid the productive utilization of rainfall plots, we demonstrate their characteristics and potential pitfalls using both simulated and real data, and provide a set of practical guidelines for their proper interpretation and usage.
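In the scatter-plot terms used above, the plot's coordinates are simple to derive from a sorted list of event positions. A minimal sketch (the function name is illustrative):

```python
def rainfall_coordinates(positions):
    """Return (x, y) points for a rainfall plot: x is each event's genomic
    position, y is its distance to the preceding event (often drawn on a
    logarithmic axis). The first event has no predecessor and is omitted."""
    pos = sorted(positions)
    return [(cur, cur - prev) for prev, cur in zip(pos, pos[1:])]

points = rainfall_coordinates([100, 150, 160, 165, 5000])
# Clustered events (150, 160, 165) yield small y values that form a low
# band; the isolated event at 5000 yields a single high point.
```

This also makes the congestion problem concrete: once many points share nearly identical (x, y) coordinates, they overplot into a single mark, which is exactly the failure mode the survey warns about.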
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geveci, Berk
The purpose of the SDAV institute is to provide tools and expertise in scientific data management, analysis, and visualization to DOE’s application scientists. Our goal is to actively work with application teams to assist them in achieving breakthrough science, and to provide technical solutions in the data management, analysis, and visualization regimes that are broadly used by the computational science community. Over the last 5 years, members of our institute worked directly with application scientists and DOE leadership-class facilities to assist them by applying the best tools and technologies at our disposal. We also enhanced our tools based on input from scientists on their needs. Many of the applications we have been working with are based on connections with scientists established in previous years. However, we contacted additional scientists through our outreach activities, as well as engaging application teams running on leading DOE computing systems. Our approach is to employ an evolutionary development and deployment process: first considering the application of existing tools, followed by the customization necessary for each particular application, and then the deployment in real frameworks and infrastructures. The institute is organized into three areas, each with area leaders, who keep track of progress, engagement of application scientists, and results. The areas are: (1) Data Management, (2) Data Analysis, and (3) Visualization. Kitware has been involved in the Visualization area. This report covers Kitware’s contributions over the last 5 years (February 2012 – February 2017). For details on the work performed by the SDAV institute as a whole, please see the SDAV final report.
ERIC Educational Resources Information Center
Schuett, Susanne; Kentridge, Robert W.; Zihl, Josef; Heywood, Charles A.
2009-01-01
Hemianopic reading and visual exploration impairments are well-known clinical phenomena. Yet, it is unclear whether they are primarily caused by the hemianopic visual field defect itself or by additional brain injury preventing efficient spontaneous oculomotor adaptation. To establish the extent to which these impairments are visually elicited we…
The Origin of Chondrules and Chondrites
NASA Astrophysics Data System (ADS)
Sears, Derek W. G.
2005-01-01
Drawing on research from the various scientific disciplines involved, this text summarizes the origin and history of chondrules and chondrites. Including citations to every published paper on the topic, it forms a comprehensive bibliography of the latest research. In addition, extensive illustrations provide a clear visual representation of the scientific theories. The text will be a valuable reference for graduate students and researchers in planetary science, geology and astronomy.
Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. 
All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.
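NeuroTessMesh performs its adaptive refinement on the GPU with tessellation shaders; as a rough CPU-side analogue (purely illustrative, not the NeuroTessMesh implementation), a coarse triangle can be split into four by edge midpoints until every edge falls below a tolerance:

```python
def midpoint(a, b):
    # midpoint of two 3D points
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def refine(tri, max_edge):
    """Recursively split a triangle (three 3D points) into four
    sub-triangles until every edge is shorter than max_edge."""
    a, b, c = tri
    edge = lambda p, q: sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5
    if max(edge(a, b), edge(b, c), edge(c, a)) <= max_edge:
        return [tri]
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(refine(t, max_edge))
    return out

# one coarse triangle refined to a uniform tolerance
tris = refine(((0, 0, 0), (1, 0, 0), (0, 1, 0)), max_edge=0.5)
```

A GPU tessellator does the equivalent subdivision per frame, with the tolerance driven by view distance so that near geometry gets more triangles than far geometry.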
Visual inspection reliability for precision manufactured parts
See, Judi E.
2015-09-04
Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.
Discovery of Marine Datasets and Geospatial Metadata Visualization
NASA Astrophysics Data System (ADS)
Schwehr, K. D.; Brennan, R. T.; Sellars, J.; Smith, S.
2009-12-01
NOAA's National Geophysical Data Center (NGDC) provides the deep archive of US multibeam sonar hydrographic surveys. NOAA stores the data as Bathymetric Attributed Grids (BAG; http://www.opennavsurf.org/) that are HDF5 formatted files containing gridded bathymetry, gridded uncertainty, and XML metadata. While NGDC provides the deep store and a basic ESRI ArcIMS interface to the data, additional tools need to be created to increase the frequency with which researchers discover hydrographic surveys that might be beneficial for their research. Using Open Source tools, we have created a draft of a Google Earth visualization of NOAA's complete collection of BAG files as of March 2009. Each survey is represented as a bounding box, an optional preview image of the survey data, and a pop-up placemark. The placemark contains a brief summary of the metadata and links to download the BAG survey files and the complete metadata file directly. Each survey is time tagged so that users can search both in space and time for surveys that meet their needs. By creating this visualization, we aim to make the entire process of data discovery, validation of relevance, and download much more efficient for research scientists who may not be familiar with NOAA's hydrographic survey efforts or the BAG format. In the process of creating this demonstration, we have identified a number of improvements that can be made to the hydrographic survey process in order to make the results easier to use, especially with respect to metadata generation. With the combination of the NGDC deep archiving infrastructure, a Google Earth virtual globe visualization, and GeoRSS feeds of updates, we hope to increase the utilization of these high-quality gridded bathymetry data. This workflow applies equally well to LIDAR topography and bathymetry. 
Additionally, with proper referencing and geotagging in journal publications, we hope to close the loop and help the community create a true “Geospatial Scholar” infrastructure.
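A per-survey KML fragment of the kind described above (a time-tagged bounding box with a download link in the pop-up) can be emitted with plain string formatting. The function, survey ID, and URL below are hypothetical, not NOAA's actual tooling:

```python
def survey_placemark(name, west, south, east, north, when, bag_url):
    """Return a KML Placemark: a time-tagged bounding-box polygon with a
    direct download link in the description balloon."""
    ring = " ".join(f"{lon},{lat},0" for lon, lat in
                    [(west, south), (east, south), (east, north),
                     (west, north), (west, south)])  # closed linear ring
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<TimeStamp><when>{when}</when></TimeStamp>"
        f"<description><![CDATA[<a href=\"{bag_url}\">Download BAG</a>]]></description>"
        "<Polygon><outerBoundaryIs><LinearRing><coordinates>"
        f"{ring}"
        "</coordinates></LinearRing></outerBoundaryIs></Polygon>"
        "</Placemark>"
    )

# hypothetical survey ID and URL, for illustration only
kml = survey_placemark("H12345", -70.9, 42.3, -70.8, 42.4,
                       "2009-03-01", "https://example.org/H12345.bag")
```

Because each Placemark carries a TimeStamp, Google Earth's time slider gives the space-and-time search behavior the abstract describes for free.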
Functional optics of glossy buttercup flowers.
van der Kooi, Casper J; Elzenga, J Theo M; Dijksterhuis, Jan; Stavenga, Doekele G
2017-02-01
Buttercup (Ranunculus spp.) flowers are exceptional because they feature a distinct gloss (mirror-like reflection) in addition to their matte-yellow coloration. We investigated the optical properties of yellow petals of several Ranunculus and related species using (micro)spectrophotometry and anatomical methods. The contribution of different petal structures to the overall visual signal was quantified using a recently developed optical model. We show that the coloration of glossy buttercup flowers is due to a rare combination of structural and pigmentary coloration. A very flat, pigment-filled upper epidermis acts as a thin-film reflector yielding the gloss, and additionally serves as a filter for light backscattered by the strongly scattering starch and mesophyll layers, which yields the matte-yellow colour. We discuss the evolution of the gloss and its two likely functions: it provides a strong visual signal to insect pollinators and increases the reflection of sunlight to the centre of the flower in order to heat the reproductive organs. © 2017 The Author(s).
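The thin-film contribution to the gloss can be illustrated with the standard two-interface (Airy) reflectance formula at normal incidence. The refractive indices and film thickness below are illustrative placeholders, not measured values from the paper:

```python
import cmath
import math

def thin_film_reflectance(wavelength_nm, n1, n2, n3, d_nm):
    """Reflectance of a film (index n2, thickness d_nm) between media n1
    and n3 at normal incidence, via the standard Airy summation."""
    r12 = (n1 - n2) / (n1 + n2)          # front-interface amplitude coefficient
    r23 = (n2 - n3) / (n2 + n3)          # back-interface amplitude coefficient
    beta = 2 * math.pi * n2 * d_nm / wavelength_nm  # one-pass phase thickness
    phase = cmath.exp(2j * beta)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return abs(r) ** 2

# illustrative stack: air / pigmented epidermis / starch layer
R = [thin_film_reflectance(w, 1.0, 1.5, 1.33, 400) for w in range(400, 701, 10)]
```

The wavelength-dependent modulation of R is the interference signature of a thin-film reflector; a very flat epidermis keeps this reflection specular, which is what reads as gloss.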
Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow.
Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong
2015-11-13
In micro-channels, the flow of viscous liquids, e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction.
Coactivation of response initiation processes with redundant signals.
Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N
2018-05-14
During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
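As a toy sketch of what "additive initiation-related activation" predicts (illustrative only; the paper's actual model details may differ), suppose each go-signal contributes activation at a constant rate and a response is initiated when the summed activation crosses a threshold:

```python
def predicted_rt(threshold, rate, delay):
    """Predicted RT (same units as delay) when a second stimulus,
    contributing an equal accumulation rate, arrives `delay` after the first."""
    t_single = threshold / rate          # crossing time with one signal
    if delay >= t_single:
        return t_single                  # second signal arrives too late to help
    # before `delay` one accumulator runs; afterwards both rates add
    return delay + (threshold - rate * delay) / (2 * rate)

# redundancy gain shrinks as the auditory delay grows, vanishing by 125 ms
rts = [predicted_rt(threshold=100.0, rate=1.0, delay=d)
       for d in (0, 25, 50, 125)]
```

This reproduces the qualitative signature tested in the study: maximal facilitation at zero stimulus-onset asynchrony, graded loss of the redundant-signal benefit with increasing delay.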
NASA Technical Reports Server (NTRS)
1971-01-01
A case study of knowledge contributions from the crew life support aspect of the manned space program is reported. The new information needed to be learned, the solutions developed, and the relation of new knowledge gained to earthly problems were investigated. Illustrations are given in the following categories: supplying atmosphere for spacecraft; providing carbon dioxide removal and recycling; providing contaminant control and removal; maintaining the body's thermal balance; protecting against the space hazards of decompression, radiation, and meteorites; minimizing fire and blast hazards; providing adequate light and conditions for adequate visual performance; providing mobility and work physiology; and providing adequate habitability.
Comparing object recognition from binary and bipolar edge images for visual prostheses
Jung, Jae-Hyun; Pu, Tian; Peli, Eli
2017-01-01
Visual prostheses require an effective representation method due to their limited display conditions, which provide only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent these features. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition. PMID:28458481
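A minimal sketch of the binary-versus-bipolar distinction (not the authors' filtering pipeline): threshold a Laplacian so that edge polarity is preserved as black or white features on a mid-gray background, rather than collapsed to black and white:

```python
import numpy as np

def bipolar_edges(img, thresh):
    """Tri-level edge image: 0 (black) and 1 (white) mark opposite edge
    polarities on a 0.5 (gray) background."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))  # 4-neighbour Laplacian
    out = np.full(img.shape, 0.5)        # gray background
    out[lap > thresh] = 1.0              # positive-polarity features
    out[lap < -thresh] = 0.0             # negative-polarity features
    return out

img = np.zeros((7, 7))
img[3, 3] = 1.0                          # a single bright dot
edges = bipolar_edges(img, 0.5)
```

A binary edge image would map both polarities to the same value; keeping the sign is exactly the extra grayscale level that a 3-level prosthesis display can exploit.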
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task with the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
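The logic of the likelihood ratio comparison can be sketched with fixed (not fitted) parameters; a real analysis, as in the paper, would maximize each likelihood over its parameters before comparing:

```python
import numpy as np

def log_norm(x, mu, sd):
    # log density of a normal distribution, elementwise
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def lr_statistic(rt, single, mixture):
    """2 * (mixture log-likelihood - single-distribution log-likelihood)."""
    mu0, sd0 = single
    p, mu1, mu2, sd = mixture
    ll0 = np.sum(log_norm(rt, mu0, sd0))
    ll1 = np.sum(np.log(p * np.exp(log_norm(rt, mu1, sd))
                        + (1 - p) * np.exp(log_norm(rt, mu2, sd))))
    return 2 * (ll1 - ll0)

# synthetic bimodal RTs: target found early vs. exhaustive-scan trials
rng = np.random.default_rng(0)
rt = np.concatenate([rng.normal(450, 40, 100),
                     rng.normal(700, 40, 100)])
stat = lr_statistic(rt, single=(575, 140), mixture=(0.5, 450, 700, 40))
```

A large positive statistic favors the two-component mixture, the pattern a serial self-terminating model predicts; the null experiments in the abstract are cases where no such advantage emerged.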
VisOHC: Designing Visual Analytics for Online Health Communities
Kwon, Bum Chul; Kim, Sung-Hee; Lee, Sukwon; Choo, Jaegul; Huh, Jina; Yi, Ji Soo
2015-01-01
Through online health communities (OHCs), patients and caregivers exchange their illness experiences and strategies for overcoming the illness, and provide emotional support. To facilitate healthy and lively conversations in these communities, their members should be continuously monitored and nurtured by OHC administrators. The main challenge of OHC administrators' tasks lies in understanding the diverse dimensions of conversation threads that lead to productive discussions in their communities. In this paper, we present a design study, conducted with three domain experts (an OHC researcher and two OHC administrators), that aimed to develop a visual analytic solution. Through our design study, we characterized the domain goals of OHC administrators and derived tasks to achieve these goals. As a result of this study, we propose a system called VisOHC, which visualizes individual OHC conversation threads as collapsed boxes, a visual metaphor of conversation threads. In addition, we augmented the posters' reply authorship network with marks and/or beams to show conversation dynamics within threads. We also developed unique measures tailored to the characteristics of OHCs, which can be encoded for thread visualizations at the users' requests. Our observation of the two administrators while using VisOHC showed that it supports their tasks and reveals interesting insights into online health communities. Finally, we share our methodological lessons on probing visual designs together with domain experts by allowing them to freely encode measurements into visual variables. PMID:26529688
Visual Inspection Reliability for Precision Manufactured Parts.
See, Judi E
2015-12-01
Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied. Eighty-two inspectors in the U.S. Nuclear Security Enterprise inspected 140 parts for eight different defects. Inspectors correctly rejected 85% of defective items and incorrectly rejected 35% of acceptable parts. Use of a phased inspection approach based on inspector confidence ratings was not an effective or efficient technique to improve the overall accuracy of the process. Results did verify that inspection is a workload-intensive task, dominated by mental demand and effort. Hits for Nuclear Security Enterprise inspection were not vastly superior to the industry average of 80%, and they were achieved at the expense of a high scrap rate not typically observed during visual inspection tasks. This study provides the first empirical data to address the reliability of visual inspection for precision manufactured parts used in nuclear weapons. Results enhance current understanding of the process of visual inspection and can be applied to improve reliability for precision manufactured parts. © 2015, Human Factors and Ergonomics Society.
ViSimpl: Multi-View Visual Analysis of Brain Simulation Data
Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis
2016-01-01
After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at the micro- or mesoscale to brain regions at the macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In this context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes, supporting different data aggregation and disaggregation operations and also giving focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1], and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical, and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
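The decoding logic can be sketched with synthetic data (purely illustrative; not the recorded field potentials): learn templates from isolated-object responses, then check that a two-object response, modeled roughly as the sum of the single-object responses, still ranks both constituents highest:

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_channels = 10, 200
# synthetic "isolated-object responses", one template per object
templates = rng.normal(size=(n_objects, n_channels))

def decode_pair(response, templates, k=2):
    """Return indices of the k templates most correlated with the response."""
    scores = templates @ response        # linear readout per object
    return set(np.argsort(scores)[-k:])

# two-object response modeled as the noisy sum of the isolated responses
pair = (2, 7)
response = templates[2] + templates[7] + 0.3 * rng.normal(size=n_channels)
decoded = decode_pair(response, templates)
```

If neural responses to pairs were strongly sub-additive or winner-take-all, this sum-based decoder would fail; the paper's finding is that real temporal-cortex responses remain close enough to this regime for linear decoding to work.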
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.
Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane
2010-12-01
Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, on measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence to support that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.
Carrasco-Zevallos, O. M.; Keller, B.; Viehland, C.; Shen, L.; Waterman, G.; Todorich, B.; Shieh, C.; Hahn, P.; Farsiu, S.; Kuo, A. N.; Toth, C. A.; Izatt, J. A.
2016-01-01
Minimally-invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, which result in a steep learning curve to achieve microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and was demonstrated to successfully visualize the pathology of interest in concordance with preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery which could expand the surgeon’s capabilities. PMID:27538478
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J
2015-07-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies.
Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus
2018-01-05
COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Additional support is provided for common nomenclatures such as ZFIN gene names and UniProt accessions. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display it in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease of use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data.
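The ontology-based aggregation that COMICS performs with ontoCAT can be sketched in miniature: expression values keyed by controlled-vocabulary terms are rolled up onto the ancestor terms of a toy anatomical ontology. The terms, hierarchy, and summing rule below are all hypothetical and chosen for illustration; COMICS itself uses the full zebrafish and mouse ontologies and the R packages named above.

```python
# Toy anatomical ontology: child term -> parent term (hypothetical CV terms)
parent = {
    "ventricle": "heart",
    "atrium": "heart",
    "heart": "organism",
    "liver": "organism",
}

# Expression measurements keyed by the CV term used in the experiment
expression = {"ventricle": 12.0, "atrium": 8.0, "liver": 5.0}

def rollup(expression, parent):
    """Aggregate each measurement onto the term itself and every ancestor term.

    Summing is one simple aggregation choice; a mean or maximum would
    follow the same traversal.
    """
    totals = {}
    for term, value in expression.items():
        node = term
        while node is not None:
            totals[node] = totals.get(node, 0.0) + value
            node = parent.get(node)  # None once the root is passed
    return totals

totals = rollup(expression, parent)
```

With the toy data above, the rollup assigns the heart the sum of its chamber measurements and the organism the sum of everything, so the same data can be drawn at whichever anatomical resolution a visualization requires.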
Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.
Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-01-01
Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, taking also into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise to motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.
Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin
2015-03-01
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
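The time-encoding idea at the core of a TEM can be sketched for a single channel with an ideal integrate-and-fire encoder: the stimulus plus a bias is integrated, and a spike time is emitted each time the running integral crosses a threshold. The parameter values below are arbitrary, and this single-channel sketch omits the paper's color mixing, demixing, and identification machinery entirely.

```python
import numpy as np

def iaf_encode(u, dt, kappa=1.0, delta=0.05, b=1.0):
    """Ideal integrate-and-fire time encoder.

    Integrates (u(t) + b) / kappa over time steps of width dt; records a
    spike time and subtracts the threshold delta whenever the integral
    reaches delta. The bias b keeps the integrand positive so spikes
    always advance.
    """
    spikes, integral = [], 0.0
    for k, sample in enumerate(u):
        integral += dt * (sample + b) / kappa
        if integral >= delta:
            spikes.append(k * dt)
            integral -= delta
    return spikes

dt = 1e-3
t = np.arange(0, 1, dt)
u = 0.3 * np.sin(2 * np.pi * 5 * t)   # bandlimited test stimulus
spike_times = iaf_encode(u, dt)
```

Because the integrand stays positive, the spike train is strictly increasing and its density tracks the stimulus amplitude, which is what makes threshold-based reconstruction (the TDM side) possible under suitable rate conditions.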
Does visual impairment lead to additional disability in adults with intellectual disabilities?
Evenhuis, H M; Sjoukes, L; Koot, H M; Kooijman, A C
2009-01-01
This study addresses the question of to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID). In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with informant-based questionnaires, prior to expert assessment of visual function. Linear regression analysis was used to calculate the percentage of variance explained by levels of visual function, for the total population and per ID level. A total of 107/269 participants were visually impaired or blind (WHO criteria). On top of the decrease attributable to ID, visual impairment significantly decreased daily living skills, communication and language, and recognition/communication. Visual impairment did not cause more self-absorbed and withdrawn behaviour or anxiety. Peculiar looking habits correlated with visual impairment and not with ID. In the groups with moderate and severe ID this effect seems stronger than in the group with profound ID. Although ID alone impairs daily functioning, visual impairment diminishes daily functioning even more. Timely detection and treatment or rehabilitation of visual impairment may positively influence daily functioning, language development, initiative and persistence, social skills, communication skills and insecure movement.
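The variance-explained calculation behind this design can be illustrated on synthetic data: fit a linear regression with ID level alone, then with a visual function predictor added, and report the increment in R². The codings, effect sizes, and noise level below are invented for illustration and do not reproduce the study's data or exact analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 269  # same sample size as the study; data entirely synthetic

id_level = rng.integers(0, 4, size=n)   # 0=mild ... 3=profound (coding assumed)
vision = rng.integers(0, 3, size=n)     # 0=normal, 1=impaired, 2=blind (coding assumed)
skills = 50 - 6 * id_level - 4 * vision + rng.normal(0, 5, size=n)  # daily living score

# R^2 with ID level alone, then with visual function added
r2_id = LinearRegression().fit(id_level[:, None], skills).score(id_level[:, None], skills)
X_both = np.column_stack([id_level, vision])
r2_both = LinearRegression().fit(X_both, skills).score(X_both, skills)

extra_variance = 100 * (r2_both - r2_id)  # % of variance added by visual function
```

The increment `extra_variance` is the quantity of interest: the share of variance in functioning attributable to visual function over and above ID level, which the study computed both overall and within each ID stratum.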
Yet More Lessons From Complexity. Unity the key for Peace.
NASA Astrophysics Data System (ADS)
Puente, C. E.
2004-12-01
The last few decades have witnessed the development of a host of ideas aimed at understanding and predicting nature's ever-present complexity. It is shown that this body of work provides, through its detailed study of order and disorder, a suitable framework for visualizing the dynamics and consequences of mankind's ever-present divisive traits. Specifically, this work explains how recent universal results pertaining to power laws, self-organized criticality and space-filling transformations provide additional and pertinent reminders that point us to unity as an essential element for achieving peace.
NASA Technical Reports Server (NTRS)
Peters, B. T.; Caldwell, E. E.; Batson, C. D.; Guined, J. R.; DeDios, Y. E.; Stepanyan, V.; Gadd, N. E.; Szecsy, D. L.; Mulavara, A. P.; Seidler, R. D.;
2014-01-01
Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission critical functions during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The way each individual's brain synthesizes the available visual, vestibular and somatosensory information is likely the basis for much of the variation. Identifying the presence of biases in each person's use of information available from these sensorimotor subsystems and relating it to their ability to adapt to a novel locomotor task will allow us to customize a training program designed to enhance sensorimotor adaptability. Eight tests are being used to measure sensorimotor subsystem performance. Three of these use measures of body sway to characterize balance during varying sensorimotor challenges. The effect of vision is assessed by repeating conditions with eyes open and eyes closed. Standing on foam, or on a support surface that pitches to maintain a constant ankle angle, provides somatosensory challenges. Information from the vestibular system is isolated when vision is removed and the support surface is compromised, and it is challenged when the tasks are done while the head is in motion. The integration and dominance of visual information is assessed in three additional tests. The Rod & Frame Test measures the degree to which a subject's perception of the visual vertical is affected by the orientation of a tilted frame in the periphery. Locomotor visual dependence is determined by assessing how much an oscillating virtual visual world affects a treadmill-walking subject. In the third of the visual manipulation tests, subjects walk an obstacle course while wearing up-down reversing prisms.
The two remaining tests include direct measures of knee and ankle proprioception and a functional movement assessment that screens for movement restrictions and asymmetries. To assess each subject's locomotor adaptability, subjects walk for twenty minutes on a treadmill that oscillates laterally at 0.3 Hz. Throughout the test, metabolic cost provides a measure of exertion and step frequency provides a measure of stability. Additionally, at four points during the perturbation period, reaction time tests are used to probe changes in the amount of mental effort being used to perform the task. As with the adaptive capability observed in astronauts during gravitational transitions, our data show significant variability between subjects. To aid in the analysis of the results, custom software tools have been developed to enhance the visualization of the large number of output variables. Preliminary analyses of the data collected to date do not show a strong relationship between adaptability and any single predictor variable. Analysis continues to identify a multifactorial predictor outcome "signature" that will inform us of locomotor adaptability.
Kretz, Florian T A; Gerl, Matthias; Gerl, Ralf; Müller, Matthias; Auffarth, Gerd U
2015-12-01
To evaluate the clinical outcomes after cataract surgery with implantation of a new diffractive multifocal intraocular lens (IOL) with a lower near addition (+2.75 D). A total of 143 eyes of 85 patients aged between 40 and 83 years that underwent cataract surgery with implantation of the multifocal IOL (MIOL) Tecnis ZKB00 (Abbott Medical Optics, Santa Ana, California, USA) were evaluated. Changes in uncorrected (uncorrected distance visual acuity, uncorrected intermediate visual acuity, uncorrected near visual acuity) and corrected (corrected distance visual acuity, corrected near visual acuity) logMAR distance, intermediate and near visual acuity, as well as manifest refraction, were evaluated during a 3-month follow-up. Additionally, patients were asked about photic phenomena and spectacle dependence. Postoperative spherical equivalent was within ±0.50 D and ±1.00 D of emmetropia in 78.1% and 98.4% of eyes, respectively. Postoperative mean monocular uncorrected distance visual acuity, uncorrected near visual acuity and uncorrected intermediate visual acuity was 0.20 logMAR or better in 73.7%, 81.1% and 83.9% of eyes, respectively. All eyes achieved monocular corrected distance visual acuity of 0.30 logMAR or better. A total of 100% of patients reported being at least moderately happy with the outcomes of the surgery. Only 15.3% of patients required the use of spectacles for some daily activities postoperatively. The introduction of low add MIOLs follows a trend to increase intermediate visual acuity. In this study a near add of +2.75 D still reaches satisfying near results and leads to high patient satisfaction for intermediate visual acuity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Selective transfer of visual working memory training on Chinese character learning.
Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel
2014-01-01
Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. 
We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and imagery processes with complex visual stimuli that fosters the coherent synthesis of a percept from a complex visual input in service of enhanced Chinese character learning. © 2013 Published by Elsevier Ltd.
Zoeller, R T; Rovet, J
2004-10-01
The original concept of the critical period of thyroid hormone (TH) action on brain development was proposed to identify the postnatal period during which TH supplementation must be provided to a child with congenital hypothyroidism to prevent mental retardation. As neuropsychological tools have become more sensitive, it has become apparent that even mild TH insufficiency in humans can produce measurable deficits in very specific neuropsychological functions, and that the specific consequences of TH deficiency depend on the precise developmental timing of the deficiency. Models of maternal hypothyroidism, hypothyroxinaemia and congenital hyperthyroidism have provided these insights. If the TH deficiency occurs early in pregnancy, the offspring display problems in visual attention, visual processing (i.e. acuity and strabismus) and gross motor skills. If it occurs later in pregnancy, children are at additional risk of subnormal visual (i.e. contrast sensitivity) and visuospatial skills, as well as slower response speeds and fine motor deficits. Finally, if TH insufficiency occurs after birth, language and memory skills are most predominantly affected. Although the experimental literature lags behind clinical studies in providing a mechanistic explanation for each of these observations, recent studies confirm that the specific action of TH on brain development depends upon developmental timing, and studies informing us about molecular mechanisms of TH action are generating hypotheses concerning possible mechanisms to account for these pleiotropic actions.
Dries, Daniel R; Dean, Diane M; Listenberger, Laura L; Novak, Walter R P; Franzen, Margaret A; Craig, Paul A
2017-01-02
A thorough understanding of the molecular biosciences requires the ability to visualize and manipulate molecules in order to interpret results or to generate hypotheses. While many instructors in biochemistry and molecular biology use visual representations, few indicate that they explicitly teach visual literacy. One reason is the need for a list of core content and competencies to guide a more deliberate instruction in visual literacy. We offer here the second stage in the development of one such resource for biomolecular three-dimensional visual literacy. We present this work with the goal of building a community for online resource development and use. In the first stage, overarching themes were identified and submitted to the biosciences community for comment: atomic geometry; alternate renderings; construction/annotation; het group recognition; molecular dynamics; molecular interactions; monomer recognition; symmetry/asymmetry recognition; structure-function relationships; structural model skepticism; and topology and connectivity. Herein, the overarching themes have been expanded to include a 12th theme (macromolecular assemblies), 27 learning goals, and more than 200 corresponding objectives, many of which cut across multiple overarching themes. The learning goals and objectives offered here provide educators with a framework on which to map the use of molecular visualization in their classrooms. In addition, the framework may also be used by biochemistry and molecular biology educators to identify gaps in coverage and drive the creation of new activities to improve visual literacy. This work represents the first attempt, to our knowledge, to catalog a comprehensive list of explicit learning goals and objectives in visual literacy. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(1):69-75, 2017. © 2016 The Authors Biochemistry and Molecular Biology Education published by Wiley Periodicals, Inc. on behalf of the International Union of Biochemistry and Molecular Biology.
Drummond, Sean P A; Anderson, Dane E; Straus, Laura D; Vogel, Edward K; Perez, Veronica B
2012-01-01
Sleep deprivation has adverse consequences for a variety of cognitive functions. The exact effects of sleep deprivation, though, are dependent upon the cognitive process examined. Within working memory, for example, some component processes are more vulnerable to sleep deprivation than others. Additionally, the differential impacts on cognition of different types of sleep deprivation have not been well studied. The aim of this study was to examine the effects of one night of total sleep deprivation and 4 nights of partial sleep deprivation (4 hours in bed/night) on two components of visual working memory: capacity and filtering efficiency. Forty-four healthy young adults were randomly assigned to one of the two sleep deprivation conditions. All participants were studied: 1) in a well-rested condition (following 6 nights of 9 hours in bed/night); and 2) following sleep deprivation, in a counter-balanced order. Visual working memory testing consisted of two related tasks. The first measured visual working memory capacity and the second measured the ability to ignore distractor stimuli in a visual scene (filtering efficiency). Results showed neither type of sleep deprivation reduced visual working memory capacity. Partial sleep deprivation also generally did not change filtering efficiency. Total sleep deprivation, on the other hand, did impair performance in the filtering task. These results suggest components of visual working memory are differentially vulnerable to the effects of sleep deprivation, and different types of sleep deprivation impact visual working memory to different degrees. Such findings have implications for operational settings where individuals may need to perform with inadequate sleep and whose jobs involve receiving an array of visual information and discriminating the relevant from the irrelevant prior to making decisions or taking actions (e.g., baggage screeners, air traffic controllers, military personnel, health care providers).
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has come up with a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization, and we tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as in the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via the web. The supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. The study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features and visualization maturity.
Intuitive Visualization of Transient Flow: Towards a Full 3D Tool
NASA Astrophysics Data System (ADS)
Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph
2015-04-01
Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), has developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets move along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly onto the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool.
Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts enabling also direct exploration and analysis of large 3D flow fields the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
Visual Acuity Reporting in Clinical Research Publications.
Tsou, Brittany C; Bressler, Neil M
2017-06-01
Visual acuity results in publications typically are reported in Snellen or non-Snellen formats or both. A study in 2011 suggested that many ophthalmologists do not understand non-Snellen formats, such as logarithm of the Minimum Angle of Resolution (logMAR) or Early Treatment Diabetic Retinopathy Study (ETDRS) letter scores. As a result, some journals, since at least 2013, have instructed authors to provide approximate Snellen equivalents next to non-Snellen visual acuity values. To evaluate how authors currently report visual acuity and whether they provide Snellen equivalents when their reports include non-Snellen formats. From November 21, 2016, through December 14, 2016, one reviewer evaluated visual acuity reporting among all articles published in 4 ophthalmology clinical journals from November 2015 through October 2016, including 3 of 4 journals that instructed authors to provide Snellen equivalents for visual acuity reported in non-Snellen formats. Frequency of formats of visual acuity reporting and frequency of providing Snellen equivalents when non-Snellen formats are given. The 4 journals reviewed had the second, fourth, fifth, and ninth highest impact factors for ophthalmology journals in 2015. Of 1881 articles reviewed, 807 (42.9%) provided a visual acuity measurement. Of these, 396 (49.1%) used only a Snellen format; 411 (50.9%) used a non-Snellen format. Among those using a non-Snellen format, 145 (35.3%) provided a Snellen equivalent while 266 (64.7%) provided only a non-Snellen format. More than half of all articles in 4 ophthalmology clinical journals fail to provide a Snellen equivalent when visual acuity is not in a Snellen format. Since many US ophthalmologists may not comprehend non-Snellen formats easily, these data suggest that editors and publishing staff should encourage authors to provide Snellen equivalents whenever visual acuity data are reported in a non-Snellen format to improve ease of understanding visual acuity measurements.
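The conversions discussed in this study follow standard definitions: logMAR is the base-10 logarithm of the minimum angle of resolution (the reciprocal of the Snellen fraction), and a commonly cited approximation relates ETDRS letter scores to logMAR. A minimal numerical sketch, using these standard formulas (the helper function names are illustrative, not from the study):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    # MAR is the reciprocal of the Snellen fraction; logMAR = log10(MAR)
    return math.log10(denominator / numerator)

def logmar_to_etdrs_letters(logmar: float) -> float:
    # Widely used approximation: ETDRS letter score ~= 85 - 50 * logMAR
    return 85.0 - 50.0 * logmar

# Snellen 20/20 corresponds to logMAR 0.0; 20/40 to logMAR ~0.30
print(round(snellen_to_logmar(20, 20), 2))  # 0.0
print(round(snellen_to_logmar(20, 40), 2))  # 0.3
```

Providing such an approximate Snellen equivalent alongside a logMAR or ETDRS value is exactly the practice the study found missing in most articles.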
SmartR: an open-source platform for interactive visual analytics for translational research data
Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard
2017-01-01
Abstract Summary: In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. Availability and Implementation: The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28334291
SmartR: an open-source platform for interactive visual analytics for translational research data.
Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard
2017-07-15
In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR , a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR . reinhard.schneider@uni.lu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kris A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
NASA Astrophysics Data System (ADS)
Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.
2018-02-01
The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project aims to develop a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver about the possibility of a frontal collision; monitoring of blind zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure monitoring; monitoring of the driver's condition; a navigation system; and an all-round view. The layout of the sensors of the developed driver visual information system is provided. The operation of the system on a vehicle prototype is considered, and possible changes to the interior and dashboard of the car are given. The implementation results are aimed at improved informing of the driver about the environment and the development of an ergonomic interior for the new GAZelle NEXT vehicle equipped with the visual information system for the driver.
Douglass, John K; Wehling, Martin F
2016-12-01
A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.
Constantinidou, Fofi; Evripidou, Christiana
2012-01-01
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.
The Chinese American Eye Study: Design and Methods
Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.
2016-01-01
Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese, 50 years and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409
An evaluation of unisensory and multisensory adaptive flight-path navigation displays
NASA Astrophysics Data System (ADS)
Moroney, Brian W.
1999-11-01
The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up display based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position had been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations, namely the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite, were combined factorially with three concurrent-task (CT) conditions (no CT, visual CT, and auditory CT) in a completely repeated-measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface.
It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
SPOCS: software for predicting and visualizing orthology/paralogy relationships among genomes.
Curtis, Darren S; Phillips, Aaron R; Callister, Stephen J; Conlan, Sean; McCue, Lee Ann
2013-10-15
At the rate that prokaryotic genomes can now be generated, comparative genomics studies require a flexible method for quickly and accurately predicting orthologs among the rapidly changing set of genomes available. SPOCS implements a graph-based ortholog prediction method to generate a simple tab-delimited table of orthologs and, in addition, HTML files that provide a visualization of the predicted ortholog/paralog relationships onto which gene/protein expression metadata may be overlaid. A SPOCS web application is freely available at http://cbb.pnnl.gov/portal/tools/spocs.html. Source code for Linux systems is also freely available under an open source license at http://cbb.pnnl.gov/portal/software/spocs.html; the Boost C++ libraries and BLAST are required.
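A tab-delimited ortholog table of the kind SPOCS produces is straightforward to post-process in downstream analyses. A minimal sketch of grouping genes by ortholog group, assuming a hypothetical three-column layout (the real SPOCS column order and headers may differ; `sample`, the gene and genome names, and the group ids are all illustrative):

```python
import csv
import io

# Hypothetical layout: gene_id <TAB> genome <TAB> ortholog_group
sample = (
    "geneA\tgenome1\tOG1\n"
    "geneB\tgenome2\tOG1\n"
    "geneC\tgenome2\tOG2\n"
)

groups = {}
for gene, genome, og in csv.reader(io.StringIO(sample), delimiter="\t"):
    groups.setdefault(og, []).append((gene, genome))

# Genes sharing a group id are the predicted orthologs
print(groups["OG1"])  # [('geneA', 'genome1'), ('geneB', 'genome2')]
```

In practice `io.StringIO(sample)` would be replaced by an open file handle on the exported table.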
Determining the interparticle force laws in amorphous solids from a visual image.
Gendelman, Oleg; Pollack, Yoav G; Procaccia, Itamar
2016-06-01
We consider the problem of how to determine the force laws in an amorphous system of interacting particles. Given the positions of the centers of mass of the constituent particles, we propose an algorithm to determine the interparticle force laws. Having n different types of constituents, we determine the coefficients in the Laurent polynomials for the n(n+1)/2 possibly different force laws. A visual image providing the particle positions, in addition to a measurement of the pressure, is all that is required. The algorithm proposed includes a part that can correct for experimental errors in the positions of the particles. Such a correction of unavoidable measurement errors is expected to benefit many experiments in the field.
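The count n(n+1)/2 above is simply the number of unordered pairs of species (including same-species pairs), and a Laurent polynomial force law is a sum of integer powers of the separation, negative powers included. A small illustration of both points (the coefficient values are arbitrary, not from the paper):

```python
from itertools import combinations_with_replacement

def n_force_laws(n: int) -> int:
    # Unordered species pairs, same-species pairs included: n*(n+1)/2
    return n * (n + 1) // 2

def laurent_force(r: float, coeffs: dict) -> float:
    # Pairwise force as a Laurent polynomial in the separation r:
    # F(r) = sum_k a_k * r**k, where the exponent k may be negative
    return sum(a * r ** k for k, a in coeffs.items())

# Three particle species -> six distinct pair force laws
print(n_force_laws(3))  # 6
print(list(combinations_with_replacement("ABC", 2)))
# [('A', 'A'), ('A', 'B'), ('A', 'C'), ('B', 'B'), ('B', 'C'), ('C', 'C')]
```

The inversion step in the paper fits the coefficients a_k of each such polynomial from the observed positions and the measured pressure.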
Early-20th-century visual observations of M13 variable stars
NASA Astrophysics Data System (ADS)
Osborn, W.; Barnard, E. E.
2016-08-01
In 1900 E. E. Barnard published 37 visual observations of Variable 2 (V2) in the globular cluster M13 made in 1899 and 1900. A review of Barnard's notebooks revealed he made many additional brightness estimates up to 1911, and he had also recorded the variations of V1 starting in 1904. These data provide the earliest-epoch light curves for these stars and thus are useful for studying their period changes. This paper presents Barnard's observations of the M13 variables along with their derived heliocentric Julian Dates and approximate V magnitudes. These include 231 unpublished observations of V2 and 94 of V1. How these data will be of value for determining period changes in these stars is described.
Angeles-Han, Sheila T.; Rabinovich, C. Egla
2016-01-01
Purpose of review This review provides updates on novel risk markers for the development of pediatric inflammatory uveitis and a severe disease course, on treatment of refractory disease, and on the measurement of visual outcomes. Recent findings There are several new genetic markers, biomarkers and clinical factors that may influence a child’s uveitis disease course. It is important to identify children at risk for poor visual outcomes and who are refractory to traditional therapy. Racial disparities have recently been reported. We describe agents of potential benefit. In addition, we discuss the importance of patient reported outcomes in this population. Summary Uveitis can lead to vision threatening complications. Timely and aggressive treatment of children identified to be at risk for a severe uveitis course may lead to improved outcomes. PMID:27328333
Parallax visualization of full motion video using the Pursuer GUI
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Forgues, Mark B.
2014-06-01
In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to the ability to apply PV to WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and the addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.
Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua
2017-01-01
It is generally well known that speech perception is often improved with integrated audiovisual input whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. To compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group.
Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology
NASA Astrophysics Data System (ADS)
Allen, Emily Christine
Mental models for scientific learning are often defined as, "cognitive tools situated between experiments and theories" (Duschl & Grandy, 2012). In learning, these cognitive tools are used to not only take in new information, but to help problem solve in new contexts. Nancy Nersessian (2008) describes a mental model as being "[loosely] characterized as a representation of a system with interactive parts with representations of those interactions. Models can be qualitative, quantitative, and/or simulative (mental, physical, computational)" (p. 63). If conceptual parts used by the students in science education are inaccurate, then the resulting model will not be useful. Students in college general chemistry courses are presented with multiple abstract topics and often struggle to fit these parts into complete models. This is especially true for topics that are founded on quantum concepts, such as atomic structure and molecular bonding taught in college general chemistry. The objectives of this study were focused on how students use visual tools introduced during instruction to reason with atomic and molecular structure, what misconceptions may be associated with these visual tools, and how visual modeling skills may be taught to support students' use of visual tools for reasoning. The research questions for this study follow from Gilbert's (2008) theory that experts use multiple representations when reasoning and modeling a system, and Kozma and Russell's (2005) theory of representational competence levels. This study finds that as students developed greater command of their understanding of abstract quantum concepts, they spontaneously provided additional representations to describe their more sophisticated models of atomic and molecular structure during interviews. 
This suggests that when visual modeling with multiple representations is taught, along with the limitations of the representations, it can assist students in the development of models for reasoning about abstract topics such as atomic and molecular structure. There is further gain if students' difficulties with these representations are targeted through the use of additional instruction, such as a workbook that requires the students to exercise their visual modeling skills.
NASA Astrophysics Data System (ADS)
McDougall, C.; McLaughlin, J.
2008-12-01
NOAA has developed several programs aimed at facilitating the use of earth system science data and data visualizations by formal and informal educators. One of them, Science On a Sphere, a visualization display tool and system that uses networked LCD projectors to display animated global datasets onto the outside of a suspended, 1.7-meter diameter opaque sphere, enables science centers, museums, and universities to display real-time and current earth system science data. NOAA's Office of Education has provided grants to such education institutions to develop exhibits featuring Science On a Sphere (SOS), to create content for it, and to evaluate audience impact. Currently, 20 public education institutions have permanent Science On a Sphere exhibits and 6 more will be installed soon. These institutions and others that are working to create and evaluate content for this system work collaboratively as a network to improve our collective knowledge about how to create educationally effective visualizations. Network members include other federal agencies, such as NASA and the Dept. of Energy, major museums such as the Smithsonian and the American Museum of Natural History, and a variety of mid-sized and small museums and universities. Although the audiences in these institutions vary widely in their scientific awareness and understanding, we find that misconceptions and a lack of familiarity with viewing visualizations are common among the audiences. Through evaluations performed in these institutions we continue to evolve our understanding of how to create content that is understandable by those with minimal scientific literacy. The findings from our network will be presented, including the importance of providing context, real-world connections, and imagery to accompany the visualizations, and the need for audience orientation before the visualizations are viewed.
Additionally, we will review the publicly accessible virtual library housing over 200 datasets for SOS and any other real or virtual globe. These datasets represent contributions from NOAA, NASA, Dept. of Energy, and the public institutions that are displaying the spheres.
ERIC Educational Resources Information Center
Braden, Roberts A., Ed.; And Others
These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…
Evaluation of Different Power of Near Addition in Two Different Multifocal Intraocular Lenses
Unsal, Ugur; Baser, Gonen
2016-01-01
Purpose. To compare near, intermediate, and distance vision and quality of vision when refractive rotational multifocal intraocular lenses with a 3.0 diopters near addition or diffractive multifocal intraocular lenses with a 2.5 diopters near addition are implanted. Methods. 41 eyes of 41 patients in whom rotational +3.0 diopters near addition IOLs were implanted and 30 eyes of 30 patients in whom diffractive +2.5 diopters near addition IOLs were implanted after cataract surgery were reviewed. Uncorrected and corrected distance visual acuity, intermediate visual acuity, near visual acuity, and patient satisfaction were evaluated 6 months later. Results. Corrected and uncorrected distance visual acuities were the same in both groups (p = 0.50 and p = 0.509, resp.). Uncorrected intermediate and corrected intermediate and near visual acuities were better in the group with the +2.5 diopters near addition (p = 0.049, p = 0.005, and p = 0.001, resp.), and uncorrected near visual acuity was better in the group with the +3.0 diopters near addition (p = 0.001). Patient satisfaction was similar in both groups. Conclusion. The +2.5 diopters near addition could be a better choice in younger patients with more distance and intermediate visual requirements (driving, outdoor activities), whereas the +3.0 diopters addition should be considered for patients requiring more near vision correction (reading). PMID:27340560
ERIC Educational Resources Information Center
Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari
2009-01-01
In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Advanced post-processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools suited to space physics post-processing. Building on the work of the Center for Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The toolkit plugin currently provides tools for reading LFM and Enlil data sets, along with automated tools for data comparison with NASA's CDAWeb database. As work progresses, many additional tools will be added; through open-source collaboration, we hope to add readers for additional model types, as well as any other tools the scientific community deems necessary. The ultimate goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
Development of Micro-Scale Assays of Mammary Stem and Progenitor Cells
2008-07-01
visualization via phase contrast along the length of the channel. Additionally, most devices can be placed on any substrate, allowing glass to be... microenvironment composition due to increases in surface-area-to-volume ratios as the scale of the culture is reduced. Purcell provided a very useful account... cultures are performed in polystyrene (or glass-bottomed) tissue culture flasks, dishes, and plates. While many microfluidic cultures are performed
Receptive Fields and the Reconstruction of Visual Information.
1985-09-01
depending on the noise. Thus our model would suggest that the interpolation filters for deblurring are playing a role in hyperacuity. This is novel... of additional precision in the information can be obtained by a process of deblurring, which could be relevant to hyperacuity. It also provides an... impulse of heat diffuses into increasingly larger Gaussian distributions as time proceeds. Mathematically, let f(x) denote the initial temperature
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted in the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6-fold compared with "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). The addition of the sound also led to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. Sensory spaces revealed by complex stimuli were two-dimensional. This may be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.
2016-12-01
The presentation will provide an overview of new tools, services, and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate MMS dayside results analysis. We will provide updates on the implementation of Particle-in-Cell (PIC) simulations at the CCMC and opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, and particle distribution moments, as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites, can be analyzed through the web-based interactive visualization system. In addition, there are options to request distribution functions in user-selected regions of interest, to fly through simulated magnetic reconnection configurations, and to view a map of distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides an opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al., 2016), which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate the locations of dayside magnetosphere boundaries, we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion region crossing events.
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06) that departs from the expected 4/3 power function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
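The proposed mechanism, a random multiplicative process with weak additive noise, can be sketched numerically. The update rule, parameter values, and function names below are illustrative assumptions, not the paper's actual Langevin model:

```python
import numpy as np

def sample_skew_kurt(x):
    """Sample skewness S and (non-excess) kurtosis K of a 1-D array."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    s2 = (d ** 2).mean()
    S = (d ** 3).mean() / s2 ** 1.5
    K = (d ** 4).mean() / s2 ** 2
    return S, K

def multiplicative_process(n, mu=-0.05, sigma=0.3, eps=1e-3, seed=0):
    """Sketch of a multiplicative process with a weak additive noise floor:
    x evolves by random log-normal factors, producing intermittent bursts."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 1.0
    for t in range(1, n):
        x[t] = (x[t - 1] * np.exp(mu + sigma * rng.standard_normal())
                + eps * rng.standard_normal())
    return x

x = multiplicative_process(20000)
S, K = sample_skew_kurt(x)
# Heavy-tailed, intermittent dynamics give positive skewness and a
# kurtosis well above the Gaussian value of 3.
```

This is only a qualitative illustration of the on-off intermittent regime; the quantitative S-K coefficients reported in the abstract come from the threshold data, not from this toy simulation.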
Allen, Christopher P. G.; Dunkley, Benjamin T.; Muthukumaraswamy, Suresh D.; Edden, Richard; Evans, C. John; Sumner, Petroc; Singh, Krish D.; Chambers, Christopher D.
2014-01-01
This series of experiments investigated the neural basis of conscious vision in humans using a form of transcranial magnetic stimulation (TMS) known as continuous theta burst stimulation (cTBS). Previous studies have shown that occipital TMS, when time-locked to the onset of visual stimuli, can induce a phenomenon analogous to blindsight in which conscious detection is impaired while the ability to discriminate ‘unseen’ stimuli is preserved above chance. Here we sought to reproduce this phenomenon using offline occipital cTBS, which has been shown to induce an inhibitory cortical aftereffect lasting 45–60 minutes. Contrary to expectations, our first experiment revealed the opposite effect: cTBS enhanced conscious vision relative to a sham control. We then sought to replicate this cTBS-induced potentiation of consciousness in conjunction with magnetoencephalography (MEG) and undertook additional experiments to assess its relationship to visual cortical excitability and levels of the inhibitory neurotransmitter γ-aminobutyric acid (GABA; via magnetic resonance spectroscopy, MRS). Occipital cTBS decreased cortical excitability and increased regional GABA concentration. No significant effects of cTBS on MEG measures were observed, although the results provided weak evidence for potentiation of event related desynchronisation in the β band. Collectively these experiments suggest that, through the suppression of noise, cTBS can increase the signal-to-noise ratio of neural activity underlying conscious vision. We speculate that gating-by-inhibition in the visual cortex may provide a key foundation of consciousness. PMID:24956195
NASA Astrophysics Data System (ADS)
Fujii, Kenji
2002-06-01
In this dissertation, a correlation mechanism for modeling processes in visual perception is introduced. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to processes in temporal vision and spatial vision, as well as in audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of the missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgment. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters extracted from the ACF and IACF for assessing subjective annoyance of noise. Thesis advisor: Yoichi Ando Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
The Ocean Observatories Initiative: Data Access and Visualization via the Graphical User Interface
NASA Astrophysics Data System (ADS)
Garzio, L. M.; Belabbassi, L.; Knuth, F.; Smith, M. J.; Crowley, M. F.; Vardaro, M.; Kerfoot, J.
2016-02-01
The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, is a broad-scale, multidisciplinary effort to transform oceanographic research by providing users with real-time access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The global array component of the OOI includes four high latitude sites: Irminger Sea off Greenland, Station Papa in the Gulf of Alaska, Argentine Basin off the coast of Argentina, and Southern Ocean near coordinates 55°S and 90°W. Each site is composed of fixed moorings, hybrid profiler moorings and mobile assets, with a total of approximately 110 instruments at each site. Near real-time (telemetered) and recovered data from these instruments can be visualized and downloaded via the OOI Graphical User Interface. In this Interface, the user can visualize scientific parameters via six different plotting functions with options to specify time ranges and apply various QA/QC tests. Data streams from all instruments can also be downloaded in different formats (CSV, JSON, and NetCDF) for further data processing, visualization, and comparison to supplementary datasets. In addition, users can view alerts and alarms in the system, access relevant metadata and deployment information for specific instruments, and find infrastructure specifics for each array including location, sampling strategies, deployment schedules, and technical drawings. These datasets from the OOI provide an unprecedented opportunity to transform oceanographic research and education, and will be readily accessible to the general public via the OOI's Graphical User Interface.
Smith, Philip L; Sewell, David K; Lilburn, Simon D
2015-11-01
Normalization models of visual sensitivity assume that the response of a visual mechanism is scaled divisively by the sum of the activity in the excitatory and inhibitory mechanisms in its neighborhood. Normalization models of attention assume that the weighting of excitatory and inhibitory mechanisms is modulated by attention. Such models have provided explanations of the effects of attention in both behavioral and single-cell recording studies. We show how normalization models can be obtained as the asymptotic solutions of shunting differential equations, in which stimulus inputs and the activity in the mechanism control growth rates multiplicatively rather than additively. The value of the shunting equation approach is that it characterizes the entire time course of the response, not just its asymptotic strength. We describe two models of attention based on shunting dynamics, the integrated system model of Smith and Ratcliff (2009) and the competitive interaction theory of Smith and Sewell (2013). These models assume that attention, stimulus salience, and the observer's strategy for the task jointly determine the selection of stimuli into visual short-term memory (VSTM) and the way in which stimulus representations are weighted. The quality of the VSTM representation determines the speed and accuracy of the decision. The models provide a unified account of a variety of attentional phenomena found in psychophysical tasks using single-element and multi-element displays. Our results show the generality and utility of the normalization approach to modeling attention.
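The link between shunting dynamics and normalization can be checked with a minimal numerical sketch, assuming a standard shunting form dr/dt = -a·r + (b - r)·E - r·I; the symbols and parameter values here are our assumptions, not the specific equations of the models cited:

```python
import numpy as np

def shunting_response(E, I, a=1.0, b=1.0, dt=1e-3, T=20.0):
    """Euler-integrate dr/dt = -a*r + (b - r)*E - r*I from r(0) = 0,
    returning the whole time course (the point of the shunting approach)."""
    r = np.zeros(int(T / dt))
    for t in range(1, len(r)):
        drdt = -a * r[t - 1] + (b - r[t - 1]) * E - r[t - 1] * I
        r[t] = r[t - 1] + dt * drdt
    return r

E, I = 4.0, 2.0                        # excitatory drive, inhibitory pool
r = shunting_response(E, I)
# Setting dr/dt = 0 gives the divisive-normalization form as the asymptote:
r_inf = 1.0 * E / (1.0 + E + I)        # b*E / (a + E + I) with a = b = 1
r_no_inh = shunting_response(E, 0.0)   # removing inhibition raises the asymptote
```

The integrated time course converges to r_inf, the divisive-normalization response, illustrating the abstract's claim that normalization models are the asymptotic solutions of shunting equations while the differential form also supplies the transient.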
A results-based process for evaluation of diverse visual analytics tools
NASA Astrophysics Data System (ADS)
Rubin, Gary; Berger, David H.
2013-05-01
With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
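Of the data products named, the rank-based probability-of-detection curve is easy to sketch. The formulation below (sort detections by confidence, then accumulate the fraction of true targets found) is an assumption on our part; the paper's exact definition may differ:

```python
import numpy as np

def rank_pd_curve(scores, labels):
    """PD at rank k: the fraction of all true targets found among the
    k highest-confidence detections."""
    order = np.argsort(scores)[::-1]             # highest confidence first
    hits = np.asarray(labels, dtype=float)[order]
    return np.cumsum(hits) / hits.sum()

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.2])   # detector confidences
labels = np.array([1, 0, 1, 1, 0, 1])               # 1 = true target
pd_curve = rank_pd_curve(scores, labels)
# The curve rises monotonically and reaches 1.0 once every target is found.
```

A curve that climbs steeply at low ranks indicates a detector that concentrates true targets among its most confident outputs, which is exactly the kind of strength-and-weakness comparison the evaluation process is after.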
NASA Astrophysics Data System (ADS)
Suzuki, Koji; Fujita, Motohiro; Matsuura, Kazuma; Fukuzono, Kazuyuki
This paper evaluates, through outdoor experiments, the adjustment process for a crossing support system for the visually disabled at signalized intersections that uses pedestrian traffic signals in concert with visible light communication (VLC) technology. In the experiments, sighted participants were blindfolded with an eye mask in order to analyze the behavior of people with acquired visual disabilities, and a full-scale crosswalk was used that took into consideration the crossing slope, the bumps at the edge of a crosswalk between the roadway and the sidewalk, and the crosswalk lines. The survey results show that repetitive use of the VLC system reduced the number of participants who lost their bearings completely and ended up standing immobile, and reduced the crossing time for each person. The performance of our VLC system is shown to be nearly equal to that of the existing support system in terms of crossing time and the number of participants standing immobile, and we clarified the factors affecting guidance accuracy through regression analyses. We then grouped the test subjects into patterns by cluster analysis and describe the walking characteristics of each group as they used the VLC system. In addition, we conducted further surveys of quasi-blind subjects who had difficulty walking with the VLC system and of visually impaired users. These revealed that guidance accuracy was improved by providing information about their receiving movement at several points on the crosswalk and by accounting for each user's walking habits.
Evidence for two attentional components in visual working memory.
Allen, Richard J; Baddeley, Alan D; Hitch, Graham J
2014-11-01
How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found disruptive concurrent load effects of equivalent magnitude on memory for shapes, colors, and colored shape conjunctions (as measured by single-probe recognition). These effects were present only for Items 1 and 2 in a 3-item sequence; the final item was always impervious to this disruption. This pattern of findings was precisely replicated in Experiment 3 when using a cued verbal recall measure of shape-color binding, with error analysis providing additional insights concerning attention-related loss of early-sequence items. These findings indicate an important role for executive processes in maintaining representations of earlier encountered stimuli in an active form alongside privileged storage of the most recent stimulus.
Data visualization and analysis tools for the MAVEN mission
NASA Astrophysics Data System (ADS)
Harter, B.; De Wolfe, A. W.; Putnam, B.; Brain, D.; Chaffin, M.
2016-12-01
The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. We have developed new software tools for exploring and analyzing the science data. Our open-source Python toolkit for working with data from MAVEN and other missions is based on the widely-used "tplot" IDL toolkit. We have replicated all of the basic tplot functionality in Python, and use the bokeh and matplotlib libraries to generate interactive line plots and spectrograms, providing additional functionality beyond the capabilities of IDL graphics. These Python tools are generalized to work with missions beyond MAVEN, and our software is available on Github. We have also been exploring 3D graphics as a way to better visualize the MAVEN science data and models. We have constructed a 3D visualization of MAVEN's orbit using the CesiumJS library, which not only allows viewing of MAVEN's orientation and position, but also allows the display of selected science data sets and their variation over time.
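A tplot-style display stacks time-aligned panels of different quantities over a shared time axis. The matplotlib sketch below shows that idea only; the data, variable names, and output filename are invented here, and the actual MAVEN toolkit on GitHub should be consulted for real usage:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # headless backend for scripted use
import matplotlib.pyplot as plt

t = np.linspace(0.0, 24.0, 500)                    # time in hours (synthetic)
density = 1e3 * np.exp(-((t - 12.0) / 4.0) ** 2)   # synthetic plasma density
b_mag = 20.0 + 5.0 * np.sin(2 * np.pi * t / 6.0)   # synthetic field magnitude

# Stacked, time-aligned panels with a shared x axis -- the core of tplot:
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
axes[0].plot(t, density)
axes[0].set_ylabel("density (cm$^{-3}$)")
axes[1].plot(t, b_mag)
axes[1].set_ylabel("|B| (nT)")
axes[1].set_xlabel("time (h)")
fig.savefig("panels.png", dpi=100)
```

The interactive line plots and spectrograms described in the abstract are produced with bokeh and matplotlib in a similar panel-per-quantity style.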
Keratoprosthesis in Ectodermal Dysplasia.
Wozniak, Rachel A F; Gonzalez, Mithra; Aquavella, James V
2016-07-01
To describe the complex surgical management and novel medical approach for a keratoprosthesis (KPro Boston type I) in a monocular, 73-year-old patient with ectodermal dysplasia and chronic, noninfectious corneal necrosis. Best-corrected visual acuity (BCVA) was measured with Snellen letters. Surgical intervention included an amniotic membrane graft, complete replacement of the KPro, conjunctival flap graft, corneal donor tissue grafts combined with inferior rectus muscle advancement, periosteal tissue graft, tarso-conjunctival flap construction, and symblepharolysis. Infliximab was used as a medical adjunctive therapy. Initial KPro placement provided a BCVA of 20/25 and long-term stability. Subsequent chronic melting at the optic border necessitated numerous surgeries to prevent extrusion and failure. Ultimate fistulization was addressed with the formation of a surgical pocket. The addition of infliximab promoted ocular surface stability, and the patient has maintained a BCVA of 20/80. Ectodermal dysplasia can result in eyelid and corneal abnormalities, requiring a KPro for visual restoration. In the setting of chronic, sterile corneal melt, novel surgical approaches and the off-label use of infliximab allowed for visual rehabilitation.
Toyz: A framework for scientific analysis of large datasets and astronomical images
NASA Astrophysics Data System (ADS)
Moolekamp, F.; Mamajek, E.
2015-11-01
As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open-source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool to browse the files on a server, quickly view very large images (>2 GB) taken with DECam and other wide-field cameras, and create their own visualization tools as extensions to the default Toyz framework.
Sieger, Tomáš; Serranová, Tereza; Růžička, Filip; Vostatek, Pavel; Wild, Jiří; Štastná, Daniela; Bonnet, Cecilia; Novák, Daniel; Růžička, Evžen; Urgošík, Dušan; Jech, Robert
2015-03-10
Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson's disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well.
Chasing the negawatt: visualization for sustainable living.
Bartram, Lyn; Rodgers, Johnny; Muise, Kevin
2010-01-01
Energy and resource management is an important and growing research area at the intersection of conservation, sustainable design, alternative energy production, and social behavior. Energy consumption can be significantly reduced by simply changing how occupants inhabit and use buildings, with little or no additional cost. Reflecting this fact, an emerging measure of grid energy capacity is the negawatt: a unit of power saved by increasing efficiency or reducing consumption. Visualization clearly has an important role in enabling residents to understand and manage their energy use. This role is tied to providing real-time feedback of energy use, which encourages people to conserve energy. The challenge is to understand not only what kinds of visualizations are most effective but also where and how they fit into a larger information system to help residents make informed decisions. In this article, we also examine the effective display of home energy-use data using a net-zero solar-powered home (North House) and the Adaptive Living Interface System (ALIS), North House's information backbone.
Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul
2014-06-01
3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention, involuntarily motivated by this affective mechanism, can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
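The core of an SSVEP-based BCI like the one described above is frequency tagging: each target flickers at a distinct rate, and the attended target is identified from spectral power in the EEG at the candidate frequencies. A minimal numpy sketch follows; the sampling rate, flicker frequencies, and synthetic "EEG" are illustrative assumptions, not the study's recording setup.

```python
import numpy as np

fs = 250.0                       # assumed EEG sampling rate (Hz)
stim_freqs = [8.0, 10.0, 12.0]   # hypothetical flicker frequencies, one per target
t = np.arange(0, 4.0, 1.0 / fs)  # one 4-second epoch

# Synthetic occipital signal: the user attends the 10 Hz target, plus noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

# Classify by comparing spectral power at each candidate stimulus frequency.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def power_at(f):
    # Sum power in a narrow band around f (+/-0.25 Hz) to tolerate leakage.
    band = (freqs >= f - 0.25) & (freqs <= f + 0.25)
    return spectrum[band].sum()

scores = {f: power_at(f) for f in stim_freqs}
detected = max(scores, key=scores.get)
print(detected)  # → 10.0, the attended target's flicker frequency
```

Practical systems refine this with canonical correlation analysis and harmonic frequencies, but the decision rule is the same spectral comparison.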
The Dorsal Visual System Predicts Future and Remembers Past Eye Position
Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart
2016-01-01
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye-position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually guided behavior. PMID:26941617
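The pooled read-out described above, a downstream unit taking a weighted sum of population firing rates to estimate eye position, can be sketched with synthetic data and ordinary least squares. The linear "gain field" tuning model and all numbers here are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 50, 400

# Synthetic population: each neuron's rate depends roughly linearly on eye
# position (an eye-position "gain field"), plus additive noise.
eye_pos = rng.uniform(-20, 20, n_samples)            # degrees
gains = rng.uniform(-0.5, 0.5, n_neurons)            # spikes/s per degree
baselines = rng.uniform(10, 30, n_neurons)           # spikes/s
rates = baselines[:, None] + gains[:, None] * eye_pos[None, :]
rates += rng.normal(0.0, 1.0, rates.shape)

# Downstream "neuron": one set of pooling weights, fit by least squares.
X = np.vstack([rates, np.ones(n_samples)])           # append a bias term
w, *_ = np.linalg.lstsq(X.T, eye_pos, rcond=None)
estimate = X.T @ w                                   # decoded eye position

rmse = np.sqrt(np.mean((estimate - eye_pos) ** 2))
print(f"decoding RMSE: {rmse:.2f} deg")
```

Fitting the same weights against a time-shifted copy of `eye_pos` would yield the predictive or postdictive variants the paper describes.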
Girard, Erin E; Al-Ahmad, Amin; Rosenberg, Jarrett; Luong, Richard; Moore, Teri; Lauritsch, Günter; Chan, Frandics; Lee, David P.; Fahrig, Rebecca
2014-01-01
Objectives Cardiac C-arm CT uses a standard C-arm fluoroscopy system rotating around the patient to provide CT-like images during interventional procedures without moving the patient to a conventional CT scanner. We hypothesize that C-arm computed tomography (CT) can be used to visualize and quantify the size of perfusion defects and late enhancement resulting from a myocardial infarction (MI) using contrast enhanced techniques similar to previous CT and magnetic resonance imaging studies. Materials and Methods A balloon occlusion followed by reperfusion in a coronary artery was used to study acute and subacute MI in 12 swine. ECG-gated C-arm CT images were acquired the day of infarct creation (n=6) or 4 weeks after infarct creation (n = 6). Images were acquired immediately following contrast injection, then at 1 minute, and every 5 minutes up to 30 minutes with no additional contrast. The volume of the infarct as measured on C-arm CT was compared against pathology. Results The volume of acute MI, visualized as a combined region of hyperenhancement with a hypoenhanced core, correlated well with pathologic staining (concordance correlation = 0.89, p<0.0001, mean difference = 0.67±2.98 cm3). The volume of subacute MI, visualized as a region of hyperenhancement, correlated well with pathologic staining at imaging times 5–15 minutes following contrast injection (concordance correlation = 0.82, p<.001, mean difference = −0.64±1.94 cm3). Conclusions C-arm CT visualization of acute and subacute myocardial infarction is possible in a porcine model but improvement in the imaging technique is important before clinical use. Visualization of MI in the catheterization lab may be possible and could provide 3D images for guidance during interventional procedures. PMID:25635589
Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors.
Preston, Jonathan L; McAllister Byun, Tara; Boyce, Suzanne E; Hamilton, Sarah; Tiede, Mark; Phillips, Emily; Rivera-Campos, Ahmed; Whalen, Douglas H
2017-01-03
Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e. "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as both the expertise of the users increases and as the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue for visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.
Planetary SUrface Portal (PSUP): a tool for easy visualization and analysis of Martian surface
NASA Astrophysics Data System (ADS)
Poulet, Francois; Quantin-Nataf, Cathy; Ballans, Hervé; Lozac'h, Loic; Audouard, Joachim; Carter, John; Dassas, Karin; Malapert, Jean-Christophe; Marmo, Chiara; Poulleau, Gilles; Riu, Lucie; Séjourné, Antoine
2016-10-01
PSUP comprises two software platforms for working with raster, vector, DTM, and hyperspectral data acquired by various space instruments observing the surface of Mars from orbit. The first platform of PSUP is MarsSI (Martian surface data processing Information System, http://emars.univ-lyon1.fr). It provides data analysis functionalities to select and download ready-to-use products or to process data through specific and validated pipelines. To date, MarsSI handles CTX, HiRISE and CRISM data of the NASA/MRO mission, HRSC and OMEGA data of the ESA/MEx mission, and THEMIS data of the NASA/ODY mission (Lozac'h et al., EPSC 2015). The second part of PSUP is also open to the scientific community and can be visited at http://psup.ias.u-psud.fr/. This web-based user interface provides access to many data products for Mars: image footprints and rasters from the MarsSI tool; compositional maps from OMEGA and TES; albedo and thermal inertia from OMEGA and TES; mosaics from THEMIS, Viking, and CTX; and high-level specific products (defined as catalogues) such as hydrated mineral sites derived from CRISM and OMEGA data, central peak mineralogy,… In addition, OMEGA C channel data cubes corrected for atmospheric and aerosol contributions can be downloaded. The architecture of PSUP data management and visualization is based on SITools2 and MIZAR, two generic tools developed in a joint effort between CNES and scientific laboratories. SITools2 provides a self-manageable data access layer deployed on the PSUP data, while MIZAR is a browser-based 3D application for discovering and visualizing geospatial data. Further developments, including the addition of high-level products for Mars (regional geological maps, new global compositional maps,…), are foreseen. Ultimately, PSUP will be adapted to other planetary surfaces and space missions in which the French research institutes are involved.
A survey of visualization systems for network security.
Shiravi, Hadi; Shiravi, Ali; Ghorbani, Ali A
2012-08-01
Security visualization is a very young term. It expresses the idea that common visualization techniques were designed for use cases that do not suit security-related data, demanding novel techniques fine-tuned for thorough analysis. A significant amount of work has been published in this area, but little has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent works in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.
Mohebbi, Saleh; Andrade, José; Nolte, Lena; Meyer, Heiko; Heisterkamp, Alexander; Majdani, Omid
2017-01-01
The present study focuses on the application of scanning laser optical tomography (SLOT) for visualization of anatomical structures inside the human cochlea ex vivo. SLOT is a laser-based, highly efficient microscopy technique which allows for tomographic imaging of the internal structure of transparent specimens. Thus, in the field of otology this technique is well suited to ex vivo study of inner ear anatomy. For this purpose, the preparation before imaging comprises decalcification, dehydration, and optical clearing of the cochlea samples in toto. Here, we demonstrate results of SLOT imaging visualizing hard and soft tissue structures with an optical resolution of down to 15 μm using extinction and autofluorescence as contrast mechanisms. Furthermore, the internal structure can be analyzed nondestructively and quantitatively in detail by sectioning of the three-dimensional datasets. The method of X-ray micro computed tomography (μCT) has been previously applied to explanted cochleae and is based solely on absorption contrast. An advantage of SLOT is that it uses visible light for image formation and thus provides a variety of contrast mechanisms known from other light microscopy techniques, such as fluorescence or scattering. We show that SLOT data is consistent with μCT anatomical data and provides additional information by using fluorescence. We demonstrate that SLOT is applicable to cochleae with metallic cochlear implants (CIs) that would lead to significant artifacts in μCT imaging. In conclusion, the present study demonstrates the capability of SLOT for high-resolution visualization of cleared human cochleae ex vivo using multiple contrast mechanisms and lays the foundation for a broad variety of additional studies. PMID:28873437
FastProject: a tool for low-dimensional analysis of single-cell RNA-Seq data.
DeTomaso, David; Yosef, Nir
2016-08-23
A key challenge in the emerging field of single-cell RNA-Seq is to characterize phenotypic diversity between cells and visualize this information in an informative manner. A common technique when dealing with high-dimensional data is to project the data to 2 or 3 dimensions for visualization. However, there are a variety of methods to achieve this result and once projected, it can be difficult to ascribe biological significance to the observed features. Additionally, when analyzing single-cell data, the relationship between cells can be obscured by technical confounders such as variable gene capture rates. To aid in the analysis and interpretation of single-cell RNA-Seq data, we have developed FastProject, a software tool which analyzes a gene expression matrix and produces a dynamic output report in which two-dimensional projections of the data can be explored. Annotated gene sets (referred to as gene 'signatures') are incorporated so that features in the projections can be understood in relation to the biological processes they might represent. FastProject provides a novel method of scoring each cell against a gene signature so as to minimize the effect of missed transcripts as well as a method to rank signature-projection pairings so that meaningful associations can be quickly identified. Additionally, FastProject is written with a modular architecture and designed to serve as a platform for incorporating and comparing new projection methods and gene selection algorithms. Here we present FastProject, a software package for two-dimensional visualization of single cell data, which utilizes a plethora of projection methods and provides a way to systematically investigate the biological relevance of these low dimensional representations by incorporating domain knowledge.
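The two ingredients FastProject combines, a low-dimensional projection of the expression matrix and a per-cell score against a gene signature, can be sketched in a few lines. The PCA-via-SVD projection and the simple mean-expression score below are generic illustrations under synthetic data, not FastProject's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_cells = 200, 100

# Synthetic log-expression matrix (genes x cells) with two cell populations.
expr = rng.normal(0.0, 1.0, (n_genes, n_cells))
expr[:20, 50:] += 3.0            # genes 0-19 are upregulated in the second half

# 2-D projection via PCA (SVD on the centered cell-by-gene matrix).
X = expr.T - expr.T.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * S[:2]        # one (x, y) point per cell

# Score each cell against a gene "signature" (here: the perturbed gene set).
signature = np.arange(20)        # hypothetical signature gene indices
scores = expr[signature].mean(axis=0)

# Cells in the second population should score higher on this signature.
print(scores[:50].mean(), scores[50:].mean())
```

Coloring `coords` by `scores` is the signature-projection pairing the tool ranks; FastProject additionally corrects for technical confounders such as gene capture rates.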
Kavrut Ozturk, Nilgun; Kavakli, Ali Sait; Sagdic, Kadir; Inanoglu, Kerem; Umot Ayoglu, Raif
2018-04-01
Although the cervical plexus block generally provides adequate analgesia for carotid endarterectomy, pain caused by metal retractors on the inferior surface of the mandible is not prevented by the cervical block. Different pain relief methods can be performed for patients who experience discomfort in these areas. In this study, the authors evaluated the effect of mandibular block in addition to cervical plexus block on pain scores in carotid endarterectomy. A prospective, randomized, controlled trial. Training and research hospital. Patients who underwent a carotid endarterectomy. Patients scheduled for carotid endarterectomy under cervical plexus block were randomized into 2 groups: group 1 (those who did not receive a mandibular block) and group 2 (those who received a mandibular block). The main purpose of the study was to evaluate the mandibular block in addition to cervical plexus block in terms of intraoperative pain scores. Intraoperative visual analog scale scores were significantly higher in group 1 (p = 0.001). The amounts of supplemental 1% lidocaine and intraoperative intravenous analgesic used were significantly higher in group 1 (p = 0.001 and p = 0.035, respectively). Patient satisfaction scores were significantly lower in group 1 (p = 0.044). The amount of postoperative analgesic used, time to first analgesic requirement, postoperative visual analog scale scores, and surgeon satisfaction scores were similar in both groups. There was no significant difference between the groups with respect to complications. No major neurologic deficits or perioperative mortality were observed. Mandibular block in addition to cervical plexus block provides better intraoperative pain control and greater patient satisfaction than cervical plexus block alone. Copyright © 2017 Elsevier Inc. All rights reserved.
Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context
ERIC Educational Resources Information Center
Schiltz, Holly K.; Oliver-Hoyo, Maria T.
2012-01-01
Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…
Haripriya, Aravind; Tan, Colin S H; Venkatesh, Rengaraj; Aravind, Srinivasan; Dev, Anand; Au Eong, Kah-Guan
2011-05-01
To determine whether preoperative counseling on possible intraoperative visual perceptions during cataract surgery helps reduce the patients' fear during surgery. Aravind Eye Hospital, Madurai, India. Randomized masked clinical trial. Patients having phacoemulsification under topical anesthesia were randomized to receive additional preoperative counseling or no additional preoperative counseling on potential intraoperative visual perceptions. After surgery, all patients were interviewed about their intraoperative experiences. Of 851 patients, 558 (65.6%) received additional preoperative counseling and 293 (34.4%) received no additional counseling. A lower proportion of patients in the counseled group were frightened than in the group not counseled for visual sensation (4.5% versus 10.6%, P<.001). Analyzed separately by specific visual sensations, similar results were found for light perception (7/558 [1.3%] versus 13/293 [4.4%], P=.007), colors (P=.001), and movement (P=.020). The mean fear score was significantly lower in the counseled group than in the not-counseled group for light perception (0.03 versus 0.12, P=.002), colors (P=.001), movement (P=.005), and flashes (P=.035). Preoperative counseling was a significant factor affecting fear after accounting for age, sex, operated eye, and duration of surgery (multivariate odds ratio, 4.3; 95% confidence interval, 1.6-11.6; P=.003). Preoperative counseling on possible visual sensations during cataract surgery under topical anesthesia significantly reduced the mean fear score and the proportion of patients reporting being frightened. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Spectrum simulation in DTSA-II.
Ritchie, Nicholas W M
2009-10-01
Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy-to-use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.
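At its core, the φ(ρz) approach reduces to integrating a depth-distribution curve against an absorption term for the emerging X-rays. The curve shape, mass absorption coefficient, and takeoff angle below are illustrative assumptions, not DTSA-II's calibrated models.

```python
import numpy as np

# Illustrative phi(rho-z) depth distribution (a simple Gaussian-like form;
# real phi(rho-z) parameterizations used in EPMA software are more elaborate).
rz = np.linspace(0.0, 1.2e-3, 500)           # mass depth rho*z in g/cm^2
dz = rz[1] - rz[0]
phi = 1.8 * np.exp(-((rz - 2.0e-4) / 3.0e-4) ** 2)

# Emitted intensity: generated X-rays are attenuated on the way out,
#   I_emitted ~ integral of phi(rho*z) * exp(-chi * rho*z) d(rho*z),
# with chi = (mu/rho) / sin(psi) for detector takeoff angle psi.
mu_over_rho = 1.0e3                          # cm^2/g, assumed absorption coefficient
psi = np.deg2rad(40.0)                       # assumed takeoff angle
chi = mu_over_rho / np.sin(psi)

generated = np.sum(phi) * dz                 # simple rectangle-rule integration
emitted = np.sum(phi * np.exp(-chi * rz)) * dz
print(f"absorption factor f(chi) = {emitted / generated:.3f}")
```

The ratio `emitted / generated` is the familiar f(χ) absorption correction; the Monte Carlo models trade this closed-form depth distribution for explicit electron trajectories.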
Lang, Andreas; Dolek, Matthias; Theißen, Bernhard; Zapp, Andreas
2011-01-01
Butterflies and moths (Lepidoptera) have been suggested for the environmental monitoring of genetically modified (GM) crops due to their suitability as ecological indicators, and because of the possible adverse impact of the cultivation of current transgenic crops. The German Association of Engineers (VDI) has developed guidelines for the standardized monitoring of Lepidoptera describing the use of light traps for adult moths, transect counts for adult butterflies, and visual search for larvae. The guidelines suggest recording adults of Crambid Snout Moths during transect counts in addition to butterflies, and present detailed protocols for the visual search of larvae. In a field survey in three regions of Germany, we tested the practicability and effort-benefit ratio of the latter two VDI approaches. Crambid Snout Moths turned out to be suitable and practical indicators, which can easily be recorded during transect counts. They were present in 57% of the studied field margins, contributing a substantial part to the overall Lepidoptera count, thus providing valuable additional information to the monitoring results. Visual search for larvae yielded results with an adequate effort-benefit ratio when targeting lepidopteran larvae of common species feeding on nettles. Visual search for larvae living on host plants other than nettles was time-consuming and yielded much lower numbers of recorded larvae. Beating samples of bushes and trees yielded a higher number of species and individuals. This method is especially appropriate when hedgerows are sampled, and was judged intermediate in the relationship between invested sampling effort and obtained results for lepidopteran larvae. In conclusion, transect counts of adult Crambid Moths and recording of lepidopteran larvae feeding on nettles are feasible additional modules for an environmental monitoring of GM crops.
Monitoring larvae living on host plants other than nettles and beating samples of bushes and trees can be used as a supplementary tool if necessary or desired. PMID:26467735
McDowell, Jennifer E.; Dyckman, Kara A.; Austin, Benjamin; Clementz, Brett A.
2008-01-01
This review provides a summary of the contributions made by human functional neuroimaging studies to the understanding of neural correlates of saccadic control. The generation of simple visually-guided saccades (redirections of gaze to a visual stimulus or prosaccades) and more complex volitional saccades require similar basic neural circuitry with additional neural regions supporting requisite higher level processes. The saccadic system has been studied extensively in non-human primates (e.g. single unit recordings) and humans (e.g. lesions and neuroimaging). Considerable knowledge of this system’s functional neuroanatomy makes it useful for investigating models of cognitive control. The network involved in prosaccade generation (by definition exogenously-driven) includes subcortical (striatum, thalamus, superior colliculus, and cerebellar vermis) and cortical structures (primary visual, extrastriate, and parietal cortices, and frontal and supplementary eye fields). Activation in these regions is also observed during endogenously-driven voluntary saccades (e.g. antisaccades, ocular motor delayed response or memory saccades, predictive tracking tasks and anticipatory saccades, and saccade sequencing), all of which require complex cognitive processes like inhibition and working memory. These additional requirements are supported by changes in neural activity in basic saccade circuitry and by recruitment of additional neural regions (such as prefrontal and anterior cingulate cortices). Activity in visual cortex is modulated as a function of task demands and may predict the type of saccade to be generated, perhaps via top-down control mechanisms. Neuroimaging studies suggest two foci of activation within FEF - medial and lateral - which may correspond to volitional and reflexive demands, respectively. 
Future research on saccade control could usefully (i) delineate important anatomical subdivisions that underlie functional differences, (ii) evaluate functional connectivity of anatomical regions supporting saccade generation using methods such as ICA and structural equation modeling, (iii) investigate how context affects behavior and brain activity, and (iv) use multi-modal neuroimaging to maximize spatial and temporal resolution. PMID:18835656
Visualization of bioelectric phenomena.
Palmer, T C; Simpson, E V; Kavanagh, K M; Smith, W M
1992-01-01
Biomedical investigators are currently able to acquire and analyze physiological and anatomical data from three-dimensional structures in the body. Often, multiple kinds of data can be recorded simultaneously. The usefulness of this information, either for exploratory viewing or for presentation to others, is limited by the lack of techniques to display it in intuitive, accessible formats. Unfortunately, the complexity of scientific visualization techniques and the inflexibility of commercial packages deter investigators from using sophisticated visualization methods that could provide them added insight into the mechanisms of the phenomena under study. The sheer volume of such data is also a problem: high-performance computing resources are often required for storage and processing, in addition to visualization. This chapter describes a novel, language-based interface that allows scientists with basic programming skills to classify and render multivariate volumetric data with a modest investment in software training. The interface facilitates data exploration by enabling experimentation with various algorithms to compute opacity and color from volumetric data. The value of the system is demonstrated using data from cardiac mapping studies, in which multiple electrodes are placed in and on the heart to measure the heart's intrinsic electrical activity and its response to external stimulation.
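The classify-and-render step the chapter describes, computing opacity and color from volumetric data, reduces to a transfer function applied per voxel before rendering. The voltage range, color ramp, and opacity curve below are illustrative choices, not the chapter's interface language.

```python
import numpy as np

rng = np.random.default_rng(3)
volume = rng.uniform(-90.0, 40.0, (16, 16, 16))  # e.g. extracellular potentials (mV)

def transfer_function(v):
    """Map a scalar field to RGBA: depolarized tissue opaque and red,
    resting tissue transparent and blue (a hypothetical classification)."""
    # Normalize over an assumed -90..40 mV physiological range.
    x = np.clip((v + 90.0) / 130.0, 0.0, 1.0)
    opacity = x ** 2                 # emphasize strongly depolarized voxels
    r, g, b = x, 0.2 * np.ones_like(x), 1.0 - x
    return np.stack([r, g, b, opacity], axis=-1)

rgba = transfer_function(volume)
print(rgba.shape)  # → (16, 16, 16, 4)
```

A volume renderer then composites these RGBA voxels along each viewing ray; experimenting with different `transfer_function` bodies is exactly the kind of exploration the language-based interface enables.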
Stereo depth and the control of locomotive heading
NASA Astrophysics Data System (ADS)
Rushton, Simon K.; Harris, Julie M.
1998-04-01
Does the addition of stereoscopic depth aid steering, the perceptual control of locomotor heading, around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth; incorrect stereoscopic depth; and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence. Thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages and that provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for a system designer and also challenge previous theoretical and psychophysical perceptual research.
NASA Technical Reports Server (NTRS)
DiZio, P.; Lackner, J. R.
2000-01-01
Reaching movements made to visual targets in a rotating room are initially deviated in path and endpoint in the direction of transient Coriolis forces generated by the motion of the arm relative to the rotating environment. With additional reaches, movements become progressively straighter and more accurate. Such adaptation can occur even in the absence of visual feedback about movement progression or terminus. Here we examined whether congenitally blind and sighted subjects without visual feedback would demonstrate adaptation to Coriolis forces when they pointed to a haptically specified target location. Subjects were tested pre-, per-, and postrotation at 10 rpm counterclockwise. Reaching to straight ahead targets prerotation, both groups exhibited slightly curved paths. Per-rotation, both groups showed large initial deviations of movement path and curvature but within 12 reaches on average had returned to prerotation curvature levels and endpoints. Postrotation, both groups showed mirror image patterns of curvature and endpoint to the per-rotation pattern. The groups did not differ significantly on any of the performance measures. These results provide compelling evidence that motor adaptation to Coriolis perturbations can be achieved on the basis of proprioceptive, somatosensory, and motor information in the complete absence of visual experience.
Subramani, Suresh; Kalpana, Raja; Monickaraj, Pankaj Moses; Natarajan, Jeyakumar
2015-04-01
Knowledge of protein-protein interactions (PPI) and their related pathways is equally important for understanding the biological functions of the living cell. Such information on human proteins is highly desirable for understanding the mechanisms of several diseases such as cancer, diabetes, and Alzheimer's disease. Because much of that information is buried in biomedical literature, an automated text mining system for visualizing human PPI and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from biomedical literature, and visualizes their associated interactions, networks, and pathways using two curated databases, HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from literature as well as curated databases. Further, new interactions mined only from literature and not reported earlier in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. Copyright © 2015 Elsevier Inc. All rights reserved.
Aberrant patterns of visual facial information usage in schizophrenia.
Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M
2013-05-01
Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in the usage of visual facial information between schizophrenia patients (n = 20) and controls (n = 20) when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest spatial frequencies. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia when differentiating between socially salient emotional states. © 2013 American Psychological Association
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.