NASA Technical Reports Server (NTRS)
1985-01-01
A visual alert system resulted from circuitry developed by Applied Cybernetics Systems for Langley as part of a space-related telemetry system. James Campman, president of Applied Cybernetics, left the company and founded Grace Industries, Inc. to manufacture security devices based on the Langley technology. His visual alert system combines visual and audible alerts for hearing-impaired people. The company also manufactures an arson-detection device called the "electronic nose" and is currently researching additional applications of the NASA technology.
DOT National Transportation Integrated Search
2012-03-01
This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...
Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin
2016-01-01
Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination. PMID:26829898
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve a network's emergency response capabilities, mitigate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technologies of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
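A minimal sketch of the divide-and-conquer layout pipeline described above, in Python. The `partition` and `combine` functions are trivial stand-ins for the MLkP/CR partitioning and the force-analysis distribution algorithm, which the abstract does not specify in detail:

```python
def partition(nodes, k):
    """Stand-in for the MLkP/CR step: split nodes into k roughly equal groups."""
    groups = [[] for _ in range(k)]
    for i, n in enumerate(nodes):
        groups[i % k].append(n)
    return groups

def grid_layout(nodes, cols=4, spacing=10):
    """Lay out one small-scale subgraph on a local grid."""
    return {n: ((i % cols) * spacing, (i // cols) * spacing)
            for i, n in enumerate(nodes)}

def combine(layouts, offset=100):
    """Stand-in for the force-based distribution step: tile subgraph layouts."""
    merged = {}
    for gi, layout in enumerate(layouts):
        dx, dy = (gi % 2) * offset, (gi // 2) * offset
        for n, (x, y) in layout.items():
            merged[n] = (x + dx, y + dy)
    return merged

nodes = [f"host{i}" for i in range(20)]
groups = partition(nodes, 4)                       # divide
merged = combine([grid_layout(g) for g in groups]) # conquer, then merge
```

Each subgraph is laid out independently (and hence in parallel), which is the source of the scalability claim in the abstract.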
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature-based and shape-based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature-based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement, in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape-based method. The shape-based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature-based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
Protein-Protein Interaction Network and Gene Ontology
NASA Astrophysics Data System (ADS)
Choi, Yunkyu; Kim, Seok; Yi, Gwan-Su; Park, Jinah
Evolution of computer technologies makes it possible to access a large amount and variety of biological data via the Internet, such as DNA sequences, proteomics data, and information discovered about them. It is expected that the combination of various data could help researchers discover further knowledge. The roles of a visualization system are to invoke human abilities to integrate information and to recognize certain patterns in the data. Thus, when various kinds of data are examined and analyzed manually, an effective visualization system is essential. One instance of such integrated visualization is the combination of protein-protein interaction (PPI) data and Gene Ontology (GO), which can enhance the analysis of PPI networks. We introduce a simple but comprehensive visualization system that integrates GO and PPI data, where GO and PPI graphs are visualized side by side, and that supports quick cross-reference functions between them. Furthermore, the proposed system provides several interactive visualization methods for efficiently analyzing the PPI network and the GO directed acyclic graph, such as context-based browsing and common-ancestor finding.
The Application of Current User Interface Technology to Interactive Wargaming Systems.
1987-09-01
components is essential to the Macintosh interface. Apple states that "Consistent visual communication is very powerful in delivering complex messages...interface. A visual interface uses visual objects as the basis of communication. "A visual communication object is some combination of text and...graphics used for communication under a system of interpretation, or visual language." The benefit of visual communication is "When humans are faced
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing has primarily focused on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
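The shot-boundary step described above, segmenting wherever a feature changes abruptly, can be sketched as follows; the feature values and threshold are hypothetical:

```python
def segment(features, threshold):
    """Split a per-frame feature sequence at abrupt changes (|delta| > threshold)."""
    boundaries = [0]
    for i in range(1, len(features)):
        if abs(features[i] - features[i - 1]) > threshold:
            boundaries.append(i)
    # pair consecutive boundaries into (start, end) segments
    return list(zip(boundaries, boundaries[1:] + [len(features)]))

# toy luminance-like feature with abrupt cuts at frames 4 and 8
visual_feature = [0.1, 0.1, 0.12, 0.11, 0.9, 0.88, 0.9, 0.91, 0.2, 0.21]
shots = segment(visual_feature, threshold=0.5)
```

The same routine applied to an audio feature sequence would yield the audio scenes; pairing the two lists of segments gives the index table the abstract refers to.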
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information
NASA Astrophysics Data System (ADS)
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
Visual affective classification by combining visual and text features
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566
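A minimal sketch of Dempster's rule of combination, the core of the fusion scheme above, restricted to mass functions over disjoint singleton hypotheses (the full theory also assigns mass to sets of hypotheses). The class labels and mass values are illustrative, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over disjoint singleton hypotheses.
    Mass on the same hypothesis multiplies; mass on differing hypotheses is
    conflict, which is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

visual = {"positive": 0.6, "negative": 0.4}   # mass from visual features
textual = {"positive": 0.7, "negative": 0.3}  # mass from text features
fused = dempster_combine(visual, textual)
```

Because both sources lean toward "positive", the fused mass on "positive" (0.42/0.54 ≈ 0.78) exceeds either source's individual belief, which is the behavior the fusion scheme exploits.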
A Closed Circuit TV System for the Visually Handicapped and Prospects for Future Research.
ERIC Educational Resources Information Center
Genensky, S. M.; And Others
Some visually handicapped persons have difficulty reading or writing even with the aid of eyeglasses, but could be helped by visual aids which increase image magnification, light intensity or brightness, or some combination of these factors. The system described here uses closed circuit television (CCTV) to provide variable magnification from 1.4x…
NASA Technical Reports Server (NTRS)
Parris, B. L.; Cook, A. M.
1978-01-01
Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, and that the combined use of visual and motion cueing results in far better performance.
A system to program projects to meet visual quality objectives
Fred L. Henley; Frank L. Hunsaker
1979-01-01
The U.S. Forest Service has established Visual Quality Objectives for National Forest lands and determined a method to ascertain the Visual Absorption Capability of those lands. Combining the two mapping inventories has allowed the Forest Service to retain visual quality while managing natural resources.
Chandelier: Picturing Potential
ERIC Educational Resources Information Center
Tebbs, Trevor J.
2014-01-01
The author--artist, scientist, educator, and visual-spatial thinker--describes the genesis of, and provides insight into, an innovative, strength-based, visually dynamic computer-aided communication system called Chandelier©. This system is the powerful combination of a sophisticated, user-friendly software program and an organizational…
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques applicable in Earth and space sciences, although it may easily be extended to include other techniques useful in disciplines such as computational fluid dynamics, finite-element analysis, and medical imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A
Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we will describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.
Evaluating Combinations of Ranked Lists and Visualizations of Inter-Document Similarity.
ERIC Educational Resources Information Center
Allan, James; Leuski, Anton; Swan, Russell; Byrd, Donald
2001-01-01
Considers how ideas from document clustering can be used to improve retrieval accuracy of ranked lists in interactive systems and how to evaluate system effectiveness. Describes a TREC (Text Retrieval Conference) study that constructed and evaluated systems that present the user with ranked lists and a visualization of inter-document similarities.…
Merlyn J. Paulson
1979-01-01
This paper outlines a project-level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design, and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...
Ueda, Jun; Yoshimura, Hajime; Shimizu, Keiji; Hino, Megumu; Kohara, Nobuo
2017-07-01
Visual and semi-quantitative assessments of 123I-FP-CIT single-photon emission computed tomography (SPECT) are useful for the diagnosis of dopaminergic neurodegenerative diseases (dNDD), including Parkinson's disease, dementia with Lewy bodies, progressive supranuclear palsy, multiple system atrophy, and corticobasal degeneration. However, the diagnostic value of combined visual and semi-quantitative assessment in dNDD remains unclear. Among 239 consecutive patients with a newly diagnosed possible parkinsonian syndrome who underwent 123I-FP-CIT SPECT in our medical center, 114 patients with a disease duration of less than 7 years were diagnosed as dNDD according to the established criteria or as non-dNDD according to clinical judgment. We retrospectively examined their clinical characteristics and the visual and semi-quantitative assessments of 123I-FP-CIT SPECT. The striatal binding ratio (SBR) was used as a semi-quantitative measure of 123I-FP-CIT SPECT. We calculated the sensitivity and specificity of visual assessment alone, semi-quantitative assessment alone, and combined visual and semi-quantitative assessment for the diagnosis of dNDD. SBR was correlated with visual assessment. Some dNDD patients with a normal visual assessment had an abnormal SBR, and vice versa. There was no statistically significant difference in sensitivity between diagnosis with visual assessment alone and with semi-quantitative assessment alone (91.2 vs. 86.8%, respectively, p = 0.29). Combined visual and semi-quantitative assessment demonstrated superior sensitivity (96.7%) to visual assessment (p = 0.03) or semi-quantitative assessment (p = 0.003) alone, with equal specificity. Visual and semi-quantitative assessments of 123I-FP-CIT SPECT are helpful for the diagnosis of dNDD, and combined visual and semi-quantitative assessment shows superior sensitivity with equal specificity.
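The sensitivity gain from an either-abnormal combination rule can be illustrated with a toy calculation; the cohort counts below are hypothetical and do not reproduce the study's data:

```python
def sensitivity_specificity(results):
    """results: list of (predicted_positive, truly_diseased) booleans."""
    tp = sum(p and d for p, d in results)
    fn = sum((not p) and d for p, d in results)
    tn = sum((not p) and (not d) for p, d in results)
    fp = sum(p and (not d) for p, d in results)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical cohort: (visual_abnormal, sbr_abnormal, has_dNDD)
cohort = ([(True, True, True)] * 80 + [(True, False, True)] * 5 +
          [(False, True, True)] * 6 + [(False, False, True)] * 2 +
          [(False, False, False)] * 20 + [(True, True, False)] * 1)

visual_only = [(v, d) for v, s, d in cohort]
combined = [(v or s, d) for v, s, d in cohort]  # abnormal on either assessment
sens_v, spec_v = sensitivity_specificity(visual_only)
sens_c, spec_c = sensitivity_specificity(combined)
```

Patients missed by visual reading but caught by the SBR (and vice versa) raise the combined sensitivity; specificity is unchanged here because, in this toy cohort, no additional non-diseased patient is flagged by the OR rule, mirroring the "superior sensitivity with equal specificity" finding.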
Power saver circuit for audio/visual signal unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Right, R. W.
1985-02-12
A combined audio and visual signal unit with the audio and visual components actuated alternately and powered over a single cable pair in such a manner that only one of the audio and visual components is drawing power from the power supply at any given instant. Thus, the power supply is never called upon to provide more energy than that drawn by the one of the components having the greater power requirement. This is particularly advantageous when several combined audio and visual signal units are coupled in parallel on one cable pair. Typically, the signal unit may comprise a horn and a strobe light for a fire alarm signalling system.
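A toy simulation of the alternation principle: with the horn and strobe never on simultaneously, the peak load equals the larger single-component draw rather than the sum, and parallel units scale the same way. The wattages are illustrative:

```python
def load_timeline(units, horn_w, strobe_w, steps=8):
    """Alternate horn and strobe each step; parallel units switch together,
    so only one component type draws power at any instant."""
    return [units * (horn_w if t % 2 == 0 else strobe_w) for t in range(steps)]

timeline = load_timeline(units=3, horn_w=10, strobe_w=6)
peak = max(timeline)
simultaneous_peak = 3 * (10 + 6)  # load if both components were driven at once
```

With three units on the pair, the supply sees at most 30 W instead of 48 W, which is the sizing advantage the abstract describes.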
Scientific & Intelligence Exascale Visualization Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Money, James H.
SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. It can connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It achieves this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.
Liu, Xiaohan; Makino, Hideo; Kobayashi, Suguru; Maeda, Yoshinobu
2007-01-01
After a public experiment with an indoor guidance system using FLC (fluorescent light communication), we found that FLC provides a promising medium for a guidance system for the visually impaired. However, precise self-positioning was not satisfactorily achieved. In this article, we propose a new self-positioning method that uses a combination of RFID (radio-frequency identification), Bluetooth, and FLC. We analyzed the situation and developed a model that combines the three communication modes. We then performed a series of experiments and obtained initial results.
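The article's abstract does not detail how the three modes are fused; one common approach is an inverse-variance weighted average of the individual position estimates, sketched here with hypothetical readings:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted average of (x, y, variance) position fixes."""
    wsum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / wsum
    y = sum(py / var for _, py, var in estimates) / wsum
    return x, y

# hypothetical fixes in meters: FLC is precise, Bluetooth moderate, RFID coarse
flc = (5.0, 2.0, 0.1)
bluetooth = (5.4, 2.2, 1.0)
rfid = (6.0, 3.0, 4.0)
x, y = fuse_positions([flc, bluetooth, rfid])
```

The fused estimate is pulled strongly toward the low-variance FLC fix while the coarse RFID reading contributes little, which is the usual rationale for combining modalities of very different precision.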
Geowall: Investigations into low-cost stereo display technologies
Steinwand, Daniel R.; Davis, Brian; Weeks, Nathan
2003-01-01
Recently, the combination of new projection technology, fast, low-cost graphics cards, and Linux-powered personal computers has made it possible to provide a stereoprojection and stereoviewing system that is much more affordable than previous commercial solutions. These Geowall systems are low-cost visualization systems built with commodity off-the-shelf components, running on open-source (and other) operating systems and using open-source applications software. In short, they are "Beowulf-class" visualization systems that provide a cost-effective way for the U.S. Geological Survey to broaden participation in the visualization community and view stereoimagery and three-dimensional models.
Visual analysis and exploration of complex corporate shareholder networks
NASA Astrophysics Data System (ADS)
Tekušová, Tatiana; Kohlhammer, Jörn
2008-01-01
The analysis of large corporate shareholder network structures is an important task in corporate governance, in financing, and in financial investment domains. In a modern economy, large structures of cross-corporation, cross-border shareholder relationships exist, forming complex networks. These networks are often difficult to analyze with traditional approaches. An efficient visualization of the networks helps to reveal the interdependent shareholding formations and the controlling patterns. In this paper, we propose an effective visualization tool that supports the financial analyst in understanding complex shareholding networks. We develop an interactive visual analysis system by combining state-of-the-art visualization technologies with economic analysis methods. Our system is capable of revealing patterns in large corporate shareholder networks, allows the visual identification of ultimate shareholders, and supports the visual analysis of integrated cash flow and control rights. We apply our system to an extensive real-world database of shareholder relationships, showing its usefulness for effective visual analysis.
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable-base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch, and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real-time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies are devised for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object. A weighted criteria function, with both image recognition and control components, is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
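A sketch of a weighted criteria function for choosing a feature combination; the two scoring terms below are simplified placeholders for the recognition and control components described in the abstract, not the paper's actual criteria:

```python
from itertools import combinations

def recognition_score(feats):
    """Placeholder: how reliably the feature set is re-detected (0..1)."""
    return min(1.0, sum(f["detectability"] for f in feats) / len(feats))

def control_score(feats):
    """Placeholder: wider spatial spread approximates better-conditioned control."""
    spread = max(f["x"] for f in feats) - min(f["x"] for f in feats)
    return min(1.0, spread / 100.0)

def best_combination(features, k, w_rec=0.5, w_ctl=0.5):
    """Maximize the weighted sum of recognition and control criteria."""
    return max(combinations(features, k),
               key=lambda c: w_rec * recognition_score(c) + w_ctl * control_score(c))

features = [
    {"name": "corner_a", "x": 0,   "detectability": 0.9},
    {"name": "corner_b", "x": 40,  "detectability": 0.7},
    {"name": "hole_c",   "x": 120, "detectability": 0.8},
]
best = best_combination(features, k=2)
```

Exhaustive enumeration over k-subsets is only feasible for small feature sets, but it makes the trade-off explicit: the winning pair balances detectability against geometric spread.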
ERIC Educational Resources Information Center
Harbusch, Karin; Hausdörfer, Annette
2016-01-01
COMPASS is an e-learning system that can visualize grammar errors during sentence production in German as a first or second language. Via drag-and-drop dialogues, it allows users to freely select word forms from a lexicon and to combine them into phrases and sentences. The system's core component is a natural-language generator that, for every new…
The Visual System of Zebrafish and its Use to Model Human Ocular Diseases
Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF
2011-01-01
Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048
VisGets: coordinated visualizations for web-based information exploration and discovery.
Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey
2008-01-01
In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.
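A VisGets query as described above is a conjunction of temporal, spatial, and topical filters; a minimal sketch over toy news items (not the actual VisGets implementation):

```python
from datetime import date

news = [
    {"title": "flood warning", "when": date(2008, 5, 2),
     "where": (51.5, -0.1), "tags": {"weather"}},
    {"title": "market rally", "when": date(2008, 5, 3),
     "where": (40.7, -74.0), "tags": {"finance"}},
    {"title": "storm damage", "when": date(2008, 6, 1),
     "where": (51.6, 0.2), "tags": {"weather"}},
]

def visget_query(items, start, end, bbox, topic):
    """Conjunction of a temporal, a spatial (bounding-box), and a topical filter."""
    (lat0, lon0), (lat1, lon1) = bbox
    return [i for i in items
            if start <= i["when"] <= end
            and lat0 <= i["where"][0] <= lat1
            and lon0 <= i["where"][1] <= lon1
            and topic in i["tags"]]

hits = visget_query(news, date(2008, 5, 1), date(2008, 5, 31),
                    ((50.0, -1.0), (52.0, 1.0)), "weather")
```

In the actual system, each filter is manipulated through its own coordinated visualization (timeline, map, tag cloud), but the underlying query remains this kind of multi-dimension conjunction.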
Computer aided system for parametric design of combination die
NASA Astrophysics Data System (ADS)
Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.
2017-09-01
In this paper, a computer-aided system for parametric design of combination dies is presented. The system is developed using the knowledge-based system technique of artificial intelligence. It can design combination dies for the production of sheet metal parts requiring punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD. Its low cost will help die designers in small and medium-scale sheet metal industries design combination dies for similar types of products, reducing design time and effort.
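A knowledge-based parametric design system of this kind maps part parameters to die dimensions through design rules; the rules and constants below are illustrative placeholders, not the actual knowledge base described in the paper:

```python
def combination_die_parameters(part):
    """Illustrative design rules: derive die dimensions from part parameters."""
    blank = part["blank_diameter"]       # mm
    t = part["sheet_thickness"]          # mm
    clearance = 0.05 * t                 # hypothetical punch-die clearance rule
    die_plate = round(blank + 40, 1)     # hypothetical margin around die opening
    punch_dia = round(blank - 2 * clearance, 3)
    return {"die_plate_width": die_plate,
            "punch_diameter": punch_dia,
            "clearance": clearance}

params = combination_die_parameters({"blank_diameter": 60.0,
                                     "sheet_thickness": 1.2})
```

Encoding such rules as functions is what lets the system regenerate a complete die design automatically whenever the part parameters change, which is the source of the claimed reduction in design time.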
Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L
2018-06-21
Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.
Understanding visualization: a formal approach using category theory and semiotics.
Vickers, Paul; Faith, Joe; Rossiter, Nick
2013-06-01
This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including relationships between systems; data collected from those systems; renderings of those data in the form of representations; the reading of those representations to create visualizations; and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara, and allows us to formally distinguish properties of the visualization process that previous work does not.
Art, illusion and the visual system.
Livingstone, M S
1988-01-01
The verve of op art, the serenity of a pointillist painting and the 3-D puzzlement of an Escher print derive from the interplay of the art with the anatomy of the visual system. Color, shape and movement are each processed separately by different structures in the eye and brain and then are combined to produce the experience we call perception.
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
Bansal, Reema; Singh, Ramandeep; Takkar, Aastha; Lal, Vivek
2017-11-01
A 15-year-old healthy boy developed acute, rapidly progressing visual loss in the left eye following herpes zoster dermatitis, with a combined central retinal artery occlusion (CRAO) and central retinal vein occlusion (CRVO), along with optic perineuritis. Laboratory tests were negative. Despite empirical, intensive antiviral treatment combined with systemic corticosteroids, vision could not be restored in the affected eye. Herpes zoster dermatitis, in an immunocompetent individual, may be associated with a combined CRAO and CRVO along with optic perineuritis, leading to profound visual loss.
ERIC Educational Resources Information Center
Ali, Emad
2009-01-01
The Picture Exchange Communication System (PECS) is an augmentative and alternative communication program (Frost & Bondy, 2002). Although PECS has been effectively used to teach functional requesting skills for children with autism, mental retardation, visual impairment, and physical disabilities (e.g., Anderson, Moore, & Bourne, 2007; Chambers &…
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)
2014-01-01
A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
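The "freezing" principle can be sketched computationally. The following is a minimal, hypothetical model (not the patented implementation): the SDS turns an illuminator on only during a narrow window at a fixed phase of an assumed purely sinusoidal vibration, so the object is always illuminated at nearly the same point of its oscillation. The 20 Hz frequency and 10% duty cycle are illustrative assumptions.

```python
# Hedged sketch of stroboscopic image modulation (SIM): derive an on/off
# drive signal (SDS) from a measured vibration frequency so that the viewed
# object is illuminated only in a narrow window around one phase of each
# vibration cycle. Sinusoidal vibration and a 10% duty cycle are assumptions.

def sim_drive_signal(t, vib_freq_hz, duty=0.10, phase=0.0):
    """Return True ('on') when time t falls in the strobe window of a cycle."""
    cycle_pos = (t * vib_freq_hz + phase) % 1.0  # position in [0, 1) of the cycle
    return cycle_pos < duty                      # flash at the same phase each cycle

# Sample the signal at 1 kHz over one 20 Hz vibration cycle (50 ms): the
# illuminator is on for the first 10% of each cycle, so the object is seen
# at (nearly) the same point of its oscillation, reducing perceived blur.
samples = [sim_drive_signal(t=i / 1000.0, vib_freq_hz=20.0) for i in range(50)]
```

In a real device the phase would be chosen (and continuously re-estimated from an accelerometer) so the strobe window falls where the object's image velocity is lowest.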
Advanced optical measuring systems for measuring the properties of fluids and structures
NASA Technical Reports Server (NTRS)
Decker, A. J.
1986-01-01
Four advanced optical methods are reviewed for the measurement or visualization of flow and structural properties. Double-exposure, diffuse-illumination, holographic interferometry can be used for three-dimensional flow visualization. When this method is combined with optical heterodyning, precise measurements of structural displacements or fluid density are possible. Time-average holography is well known as a method for displaying vibrational mode shapes, but it can also be used for flow visualization and flow measurements. Deflectometry is used to measure or visualize the deflection of light rays from collimation; this deflection occurs because of refraction in a fluid or reflection from a tilted surface. The moire technique for deflectometry, when combined with optical heterodyning, permits very precise measurements of these quantities. The rainbow schlieren method of deflectometry allows varying deflection angles to be encoded with colors for visualization.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
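The Bayesian ideal prediction referenced above has a simple closed form: each cue's heading estimate is weighted by its reliability (inverse variance), and the combined estimate is more reliable than either cue alone. The sketch below is illustrative only; the means and standard deviations are hypothetical stand-ins, not the paper's fitted values.

```python
# Illustrative sketch (not the authors' analysis code) of reliability-weighted
# (Bayesian ideal-observer) cue combination for visual-inertial heading.
# All numeric inputs below are hypothetical examples.

def combine_cues(mu_visual, sigma_visual, mu_inertial, sigma_inertial):
    """Combine two cues, weighting each by its reliability (1/variance)."""
    r_v = 1.0 / sigma_visual ** 2        # visual reliability
    r_i = 1.0 / sigma_inertial ** 2      # inertial reliability
    w_v = r_v / (r_v + r_i)              # weight on the visual cue
    mu = w_v * mu_visual + (1.0 - w_v) * mu_inertial
    sigma = (1.0 / (r_v + r_i)) ** 0.5   # combined estimate is tighter than either cue
    return mu, sigma

# With eccentric gaze the visual and inertial PSEs shift in opposite
# directions; the combined PSE lands between them, closer to the more
# reliable cue (e.g., high visual coherence -> smaller sigma_visual).
mu, sigma = combine_cues(mu_visual=-10.2, sigma_visual=2.0,
                         mu_inertial=4.6, sigma_inertial=4.0)
```

With these example numbers the visual cue is four times as reliable as the inertial cue, so the combined estimate sits much closer to the visual PSE, mirroring the coherence-dependent shifts the study reports.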
Ma, Teng; Li, Hui; Deng, Lili; Yang, Hao; Lv, Xulin; Li, Peiyang; Li, Fali; Zhang, Rui; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-04-01
Movement control is an important application for EEG-BCI (EEG-based brain-computer interface) systems. A single-modality BCI cannot provide an efficient and natural control strategy, but a hybrid BCI system that combines two or more different tasks can effectively overcome the drawbacks encountered in single-modality BCI control. In the current paper, we developed a new hybrid BCI system combining MI (motor imagery) and mVEP (motion-onset visual evoked potential), aiming to realize more efficient 2D movement control of a cursor. The offline analysis demonstrates that the hybrid BCI system proposed in this paper could evoke the desired MI and mVEP signal features simultaneously, and both are very close to those evoked in the single-modality BCI task. Furthermore, the online 2D movement control experiment reveals that the proposed hybrid BCI system could provide more efficient and natural control commands. The proposed hybrid BCI system thus offers a practical means of realizing efficient online 2D movement control, especially in situations where P300 stimuli are not suitable.
Spectrally-balanced chromatic approach-lighting system
NASA Technical Reports Server (NTRS)
Chase, W. D.
1977-01-01
Approach lighting system employing combinations of red and blue lights reduces problem of color-based optical illusions. System exploits inherent chromatic aberration of eye to create three-dimensional effect, giving pilot visual clues of position.
Information visualization of the minority game
NASA Astrophysics Data System (ADS)
Jiang, W.; Herbert, R. D.; Webber, R.
2008-02-01
Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people analyze the system. In this paper, we propose the use of two space-filling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the information visualization tools. The model we use is an evolutionary version of the Minority Game, which simulates a financial market.
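The underlying model can be sketched in a few lines. This is the basic Minority Game (the paper studies an evolutionary variant), with illustrative parameter choices: each round, every agent plays +1 or -1 via its currently best-scoring lookup-table strategy, the minority side wins, and the resulting "attendance" time series is the kind of voluminous output the visualization tools are meant to make digestible.

```python
import random

# Minimal sketch of the basic Minority Game (the paper uses an evolutionary
# version). Agent count, memory length, and strategy count are illustrative.

def minority_game(n_agents=101, n_strategies=2, memory=3, rounds=200, seed=1):
    rng = random.Random(seed)
    n_hist = 2 ** memory  # number of distinct recent-history bit patterns
    # Each agent holds fixed random strategies: history pattern -> action (+1/-1).
    agents = [[[rng.choice((-1, 1)) for _ in range(n_hist)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history, attendance = 0, []
    for _ in range(rounds):
        # Each agent plays its currently best-scoring strategy.
        actions = [ag[max(range(n_strategies), key=lambda s: sc[s])][history]
                   for ag, sc in zip(agents, scores)]
        total = sum(actions)               # > 0 means the majority chose +1
        minority = -1 if total > 0 else 1  # the minority side wins this round
        attendance.append(total)
        # Reward every strategy that would have picked the minority side.
        for ag, sc in zip(agents, scores):
            for s in range(n_strategies):
                if ag[s][history] == minority:
                    sc[s] += 1
        # Shift the winning side into the recent-history bit pattern.
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist
    return attendance

attendance = minority_game()  # 200-round time series to be visualized
```

With an odd number of agents there is always a strict minority, and the fluctuating attendance series is exactly the sort of output that tables and plots struggle to summarize.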
Real-time scalable visual analysis on mobile devices
NASA Astrophysics Data System (ADS)
Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William
2008-02-01
Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.
Volcanic Gas Emissions Mapping Using a Mass Spectrometer System
NASA Technical Reports Server (NTRS)
Griffin, Timothy P.; Diaz, J. Andres
2008-01-01
The visualization of hazardous gaseous emissions at volcanoes using in-situ mass spectrometry (MS) is a key step towards a better comprehension of the geophysical phenomena surrounding eruptive activity. In-situ gas data, consisting of helium, carbon dioxide, sulfur dioxide, and other gas species, were acquired with an MS system. MS and global positioning system (GPS) data were plotted on ground imagery, topography, and remote sensing data collected by a host of instruments during the second Costa Rica Airborne Research and Technology Applications (CARTA) mission. This combination of gas and imaging data allowed 3-dimensional (3-D) visualization of the volcanic plume and the mapping of gas concentration at several volcanic structures and urban areas. This combined set of data has proven a better tool for assessing hazardous conditions by visualizing and modeling possible scenarios of volcanic activity. The MS system is used for in-situ measurement of three-dimensional gas concentrations at different volcanic locations using three different transportation platforms: aircraft, automobile, and hand-carried. A demonstration of urban contamination mapping is also presented as another possible use for the MS system.
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
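As one hedged illustration of "enhance, not obfuscate" (this is not the authors' fusion algorithm), a per-pixel blend can give the infrared channel weight only where the visual channel is too dark to be informative. The darkness floor of 0.2 and the 0.0-1.0 grayscale convention are assumptions for the sketch.

```python
# Hypothetical per-pixel fusion rule: trust the visual image where it is
# bright, and fall back to the infrared image only where the visual pixel
# is too dark to carry information. Pixel values are 0.0-1.0 grayscale.

def fuse_pixel(visual, infrared, floor=0.2):
    """Blend one pixel pair; IR dominates only where the visual pixel is dark."""
    # Visual weight: full trust for pixels at or above the darkness floor,
    # fading linearly toward zero as the visual pixel approaches black.
    w_vis = min(1.0, visual / floor)
    return w_vis * visual + (1.0 - w_vis) * infrared

def fuse_images(visual_img, infrared_img, floor=0.2):
    """Fuse two same-sized grayscale images (lists of rows) pixel by pixel."""
    return [[fuse_pixel(v, r, floor) for v, r in zip(vrow, rrow)]
            for vrow, rrow in zip(visual_img, infrared_img)]

# A smoke-obscured (black) visual region is filled in from the IR image,
# while a well-lit region passes through unchanged.
fused = fuse_images([[0.0, 0.9]], [[0.7, 0.1]])
```

More sophisticated fusion schemes operate on multi-resolution decompositions rather than raw pixels, but the balancing problem the abstract describes is the same: how much infrared to admit at each location.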
The SCHEIE Visual Field Grading System
Sankar, Prithvi S.; O’Keefe, Laura; Choi, Daniel; Salowe, Rebecca; Miller-Ellis, Eydie; Lehman, Amanda; Addis, Victoria; Ramakrishnan, Meera; Natesh, Vikas; Whitehead, Gideon; Khachatryan, Naira; O’Brien, Joan
2017-01-01
Objective: No method of grading visual field (VF) defects has been widely accepted throughout the glaucoma community. The SCHEIE (Systematic Classification of Humphrey visual fields-Easy Interpretation and Evaluation) grading system for glaucomatous visual fields was created to convey qualitative and quantitative information regarding visual field defects in an objective, reproducible, and easily applicable manner for research purposes. Methods: The SCHEIE grading system is composed of a qualitative and a quantitative score. The qualitative score consists of designation in one or more of the following categories: normal, central scotoma, paracentral scotoma, paracentral crescent, temporal quadrant, nasal quadrant, peripheral arcuate defect, expansive arcuate, or altitudinal defect. The quantitative component incorporates the Humphrey visual field index (VFI), location of visual defects for superior and inferior hemifields, and blind spot involvement. Accuracy and speed at grading using the qualitative and quantitative components were calculated for non-physician graders. Results: Graders had a median accuracy of 96.67% for their qualitative scores and a median accuracy of 98.75% for their quantitative scores. Graders took a mean of 56 seconds per visual field to assign a qualitative score and 20 seconds per visual field to assign a quantitative score. Conclusion: The SCHEIE grading system is a reproducible tool that combines qualitative and quantitative measurements to grade glaucomatous visual field defects. The system aims to standardize clinical staging and to make specific visual field defects more easily identifiable. Specific patterns of visual field loss may also be associated with genetic variants in future genetic analysis. PMID:28932621
Albinism: Particular Attention to the Ocular Motor System
Hertle, Richard W.
2013-01-01
The purpose of this report is to summarize an understanding of the ocular motor system in patients with albinism. Other than the association of vertical eccentric gaze null positions and asymmetric, (a)periodic alternating nystagmus in a large percentage of patients, the ocular motor system in human albinism does not contain unique pathology, but rather exhibits "typical" types of infantile ocular oscillations and binocular disorders. Both the ocular motor and afferent visual systems are affected to varying degrees in patients with albinism; thus, combined treatment of both systems will maximize visual function. PMID:24014991
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Design and implementation of visualization methods for the CHANGES Spatial Decision Support System
NASA Astrophysics Data System (ADS)
Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan
2014-05-01
The CHANGES Spatial Decision Support System (SDSS) is a web-based system aimed at risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal, and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional aspect of the risk information required by the system, combined with the diversity of the end-users, imposes the use of sophisticated visualization methods and tools. The key goal of the present work is to exploit efficiently the large amount of data in relation to the needs of the end-user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end-users, definition of the system's modules, and data definition. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on the end-user category, each user has access to different modules of the system and thus to the proper visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open-source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4, and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying, and comparison tools.
The map comparison tools are of great importance within the SDSS and include the following: swiping tool for comparison of different data of the same location; raster subtraction for comparison of the same phenomena varying in time; linked views for comparison of data from different locations and a time slider tool for monitoring changes in spatio-temporal data. All these techniques are part of the interactive interface of the system and make use of spatial and spatio-temporal data. Further significant aspects of the visualization component include conventional cartographic techniques and visualization of non-spatial data. The main expectation from the present work is to offer efficient visualization of risk-related data in order to facilitate the decision making process, which is the final purpose of the CHANGES SDSS. This work is part of the "CHANGES" project, funded by the European Community's 7th Framework Programme.
Setting technical standards for visual assessment procedures
Kenneth H. Craik; Nickolaus R. Feimer
1979-01-01
Under the impetus of recent legislative and administrative mandates concerning analysis and management of the landscape, governmental agencies are being called upon to adopt or develop visual resource and impact assessment (VRIA) systems. A variety of techniques that combine methods of psychological assessment and landscape analysis to serve these purposes is being...
A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles
NASA Technical Reports Server (NTRS)
Delgado, Frank; Abernathy, Mike
2004-01-01
A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.
Visualizing the spinal neuronal dynamics of locomotion
NASA Astrophysics Data System (ADS)
Subramanian, Kalpathi R.; Bashor, D. P.; Miller, M. T.; Foster, J. A.
2004-06-01
Modern imaging and simulation techniques have enhanced system-level understanding of neural function. In this article, we present an application of interactive visualization to understanding neuronal dynamics causing locomotion of a single hip joint, based on pattern generator output of the spinal cord. Our earlier work visualized cell-level responses of multiple neuronal populations. However, the spatial relationships were abstract, making communication with colleagues difficult. We propose two approaches to overcome this: (1) building a 3D anatomical model of the spinal cord with neurons distributed inside, animated by the simulation and (2) adding limb movements predicted by neuronal activity. The new system was tested using a cat walking central pattern generator driving a pair of opposed spinal motoneuron pools. Output of opposing motoneuron pools was combined into a single metric, called "Net Neural Drive", which generated angular limb movement in proportion to its magnitude. Net neural drive constitutes a new description of limb movement control. The combination of spatial and temporal information in the visualizations elegantly conveys the neural activity of the output elements (motoneurons), as well as the resulting movement. The new system encompasses five biological levels of organization from ion channels to observed behavior. The system is easily scalable, and provides an efficient interactive platform for rapid hypothesis testing.
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.
Octopus vulgaris uses visual information to determine the location of its arm.
Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael
2011-03-22
Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.
Digital to analog conversion and visual evaluation of Thematic Mapper data
McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.
1985-01-01
As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
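The estimation step at the heart of DSSE is, in its simplest form, weighted least squares. The scalar sketch below is illustrative only (the platform's actual estimator, network model, and meter accuracies are not specified here): several noisy meters observing the same node voltage are fused, each weighted by the inverse variance of its assumed accuracy class.

```python
# Illustrative scalar sketch of the weighted-least-squares idea behind
# distribution system state estimation (DSSE). Readings and meter sigmas
# below are hypothetical, not values from the demonstration platform.

def wls_estimate(readings, sigmas):
    """Scalar WLS: x* = sum(w_i * z_i) / sum(w_i), with w_i = 1/sigma_i^2."""
    weights = [1.0 / s ** 2 for s in sigmas]
    x = sum(w * z for w, z in zip(weights, readings)) / sum(weights)
    variance = 1.0 / sum(weights)  # fused estimate is tighter than any one meter
    return x, variance

# Three meters observing one node voltage (per-unit); the high-accuracy
# unit (sigma = 0.005) pulls the estimate toward its reading of 1.02 p.u.
v, var = wls_estimate(readings=[1.02, 1.04, 1.03], sigmas=[0.005, 0.02, 0.02])
```

A full DSSE generalizes this to a vector of node states and a nonlinear power-flow measurement model, but the principle is the same: measurements are weighted by confidence, and the fused state is what the map visualizations display.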
Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.
Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice
2013-01-01
Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function (heterophoria and near point of accommodation values), as well as in eyestrain and visually induced motion sickness levels, were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes were small in magnitude. According to subjective opinions that further support these measurements, using a stereoscopic three-dimensional system for up to 2 h was acceptable for most of the users regardless of their age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
USING VISUAL PLUMES PREDICTIONS TO MODULATE COMBINED SEWER OVERFLOW (CSO) RATES
High concentrations of pathogens and toxic residues in creeks and rivers can pose risks to human health and ecological systems. Combined Sewer Overflows (CSOs) discharging into these watercourses often contribute significantly to elevating pollutant concentrations during wet weat...
Short-term retention of pictures and words: evidence for dual coding systems.
Pellegrino, J W; Siegel, A W; Dhawan, M
1975-03-01
The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
NASA Astrophysics Data System (ADS)
Ma, Teng; Li, Hui; Deng, Lili; Yang, Hao; Lv, Xulin; Li, Peiyang; Li, Fali; Zhang, Rui; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-04-01
Objective. Movement control is an important application for EEG-BCI (EEG-based brain-computer interface) systems. A single-modality BCI cannot provide an efficient and natural control strategy, but a hybrid BCI system that combines two or more different tasks can effectively overcome the drawbacks encountered in single-modality BCI control. Approach. In the current paper, we developed a new hybrid BCI system by combining MI (motor imagery) and mVEP (motion-onset visual evoked potential), aiming to realize the more efficient 2D movement control of a cursor. Main result. The offline analysis demonstrates that the hybrid BCI system proposed in this paper could evoke the desired MI and mVEP signal features simultaneously, and both are very close to those evoked in the single-modality BCI task. Furthermore, the online 2D movement control experiment reveals that the proposed hybrid BCI system could provide more efficient and natural control commands. Significance. The proposed hybrid BCI system is compensative to realize efficient 2D movement control for a practical online system, especially for those situations in which P300 stimuli are not suitable to be applied.
NASA Astrophysics Data System (ADS)
Guy, Nathaniel
This thesis explores new ways of looking at telemetry data from a time-correlative perspective, in order to see patterns within the data that may suggest root causes of system faults. It was initially thought that visualizing an animated Pearson correlation coefficient (PCC) matrix for telemetry channels would be sufficient to yield new understanding; however, testing showed that the high dimensionality of this approach, and the inability to easily examine change over time, impeded understanding. Different correlative techniques, combined with the time-curve visualization proposed by Bach et al. (2015), were adapted to visualize both raw telemetry and telemetry data correlations. Review revealed that these new techniques give insights into the data and an intuitive grasp of data families, which shows the effectiveness of this approach for enhancing system understanding and assisting with root-cause analysis for complex aerospace systems.
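The animated PCC matrix idea described above amounts to a sliding-window correlation computation over the telemetry channels. A minimal sketch (window and step sizes are arbitrary illustrative choices, not the thesis's parameters):

```python
import numpy as np

def sliding_pcc_matrices(channels, window, step):
    """Pearson correlation matrix of telemetry channels over sliding windows.
    channels: array of shape (n_channels, n_samples).
    Returns an array of shape (n_windows, n_channels, n_channels)."""
    n_ch, n = channels.shape
    mats = []
    for start in range(0, n - window + 1, step):
        segment = channels[:, start:start + window]
        mats.append(np.corrcoef(segment))  # one PCC matrix per window
    return np.stack(mats)

# Two channels that are perfectly correlated in the first half of the
# record and anti-correlated in the second half.
t = np.arange(200, dtype=float)
a = np.sin(0.3 * t)
b = np.concatenate([a[:100], -a[100:]])
mats = sliding_pcc_matrices(np.vstack([a, b]), window=50, step=50)
print(mats[0, 0, 1], mats[-1, 0, 1])  # first window ~ +1, last window ~ -1
```

Animating the sequence of matrices over the window index is exactly the high-dimensional view the thesis found hard to follow, which motivates collapsing it into time-curve visualizations instead.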
Chronodes: Interactive Multifocus Exploration of Event Sequences
POLACK, PETER J.; CHEN, SHANG-TSE; KAHNG, MINSUK; DE BARBARO, KAYA; BASOLE, RAHUL; SHARMIN, MOUSHUMI; CHAU, DUEN HORNG
2018-01-01
The advent of mobile health (mHealth) technologies challenges the capabilities of current visualizations, interactive tools, and algorithms. We present Chronodes, an interactive system that unifies data mining and human-centric visualization techniques to support explorative analysis of longitudinal mHealth data. Chronodes extracts and visualizes frequent event sequences that reveal chronological patterns across multiple participant timelines of mHealth data. It then combines novel interaction and visualization techniques to enable multifocus event sequence analysis, which allows health researchers to interactively define, explore, and compare groups of participant behaviors using event sequence combinations. Through summarizing insights gained from a pilot study with 20 behavioral and biomedical health experts, we discuss Chronodes’s efficacy and potential impact in the mHealth domain. Ultimately, we outline important open challenges in mHealth, and offer recommendations and design guidelines for future research. PMID:29515937
Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel
2017-01-01
Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is whether, when domain experts analyze their data, they can completely trust the outputs and operations on the machine side. Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative visual analytics system for biologists, focusing on analyzing the relationship between familiarity with an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust comparable with that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall scheme for the system architecture, then specified the software and hardware environment and determined the main function modules of the system. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and other programming languages based on the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated in the system for ease of maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. The experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.
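The paper does not specify its resampling scheme, but the "resample server responses before visualization" step could be as simple as block averaging a long series down to a chart-friendly number of points. A hypothetical sketch of that idea (function name and parameters are invented for illustration):

```python
import numpy as np

def resample_for_display(values, max_points):
    """Downsample a long air-quality series to at most max_points block
    means so a browser chart stays responsive. A sketch of the idea only;
    the paper's actual resampling scheme is not published."""
    values = np.asarray(values, dtype=float)
    if len(values) <= max_points:
        return values
    block = int(np.ceil(len(values) / max_points))
    pad = (-len(values)) % block                    # pad to a whole block
    padded = np.concatenate([values, np.full(pad, np.nan)])
    return np.nanmean(padded.reshape(-1, block), axis=1)

hourly = np.arange(24.0)  # one day of hourly readings, e.g. PM2.5
daily = resample_for_display(hourly, max_points=4)
print(daily)  # four 6-hour block means: [2.5, 8.5, 14.5, 20.5]
```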
Sundvall, Erik; Nyström, Mikael; Forss, Mattias; Chen, Rong; Petersson, Håkan; Ahlfeldt, Hans
2007-01-01
This paper describes selected earlier approaches to graphically relating events to each other and to time; some new combinations are also suggested. These are then combined into a unified prototyping environment for visualization and navigation of electronic health records. Google Earth (GE) is used for handling display and interaction of clinical information stored using openEHR data structures and 'archetypes'. The strength of the approach comes from GE's sophisticated handling of detail levels, from coarse overviews to fine-grained details, which has been combined with linear, polar, and region-based views of clinical events related to time. The system should be easy to learn, since all the visualization styles can use the same navigation. The structured and multifaceted approach to handling time that is possible with archetyped openEHR data lends itself well to visualization, and integration with openEHR components is provided in the environment.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the advantages of strong independence, high precision, and low susceptibility to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes, and underwater robots, further stimulating research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration program, there has been significant progress in the study of visual navigation. The paper surveys the development of vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: attitude, position, and velocity information can be obtained from the relationship between sensor images and the carrier's attitude, between instant matching images and reference images, and between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: among the many ways to achieve obstacle avoidance in UAV navigation, methods based on computer vision, including feature matching, template matching, and image frames, are mainly introduced. (3) Target tracking and positioning: using the obtained images, UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering, and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems use a parallel structure in which image detection and processing are carried out at high speed; such systems are applied to rapid-response applications. (2) Distributed-network visual systems place several discrete image acquisition sensors at different locations, which transmit image data to a node processor to increase the sampling rate. (3) Observer-combined visual systems combine image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome shortcomings of the early visual systems, including low frame rates, low processing efficiency, and strong noise. Finally, the difficulties of vision-based navigation in practical applications are briefly discussed: (1) the huge workload of image operations gives the system poor real-time performance; (2) large environmental effects give the system poor anti-interference ability; and (3) because it works only in particular environments, the system has poor adaptability.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
A Visual Analytics Approach for Station-Based Air Quality Data.
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-12-24
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
Visualizing Rank Time Series of Wikipedia Top-Viewed Pages.
Xia, Jing; Hou, Yumeng; Chen, Yingjie Victor; Qian, Zhenyu Cheryl; Ebert, David S; Chen, Wei
2017-01-01
Visual clutter is a common challenge when visualizing large rank time series data. WikiTopReader, a reader of Wikipedia page rank, lets users explore connections among top-viewed pages by connecting page-rank behaviors with page-link relations. Such a combination enhances the unweighted Wikipedia page-link network and focuses attention on the page of interest. A set of user evaluations shows that the system effectively represents evolving ranking patterns and page-wise correlation.
Solar System Symphony: Combining astronomy with live classical music
NASA Astrophysics Data System (ADS)
Kremer, Kyle; WorldWide Telescope
2017-01-01
Solar System Symphony is an educational outreach show which combines astronomy visualizations and live classical music. As musicians perform excerpts from Holst’s “The Planets” and other orchestral works, visualizations developed using WorldWide Telescope and NASA images and animations are projected on-stage. Between each movement of music, a narrator guides the audience through scientific highlights of the solar system. The content of Solar System Symphony is geared toward a general audience, particularly targeting K-12 students. The hour-long show not only presents a new medium for exposing a broad audience to astronomy, but also provides universities an effective tool for facilitating interdisciplinary collaboration between two divergent fields. The show was premiered at Northwestern University in May 2016 in partnership with Northwestern’s Bienen School of Music and was recently performed at the Colburn Conservatory of Music in November 2016.
The visual attention saliency map for movie retrospection
NASA Astrophysics Data System (ADS)
Rogalska, Anna; Napieralski, Piotr
2018-04-01
The visual saliency map is becoming important and challenging for many scientific disciplines (robotic systems, psychophysics, cognitive neuroscience, and computer science). The map created by the model indicates possible salient regions by taking into consideration face presence and motion, which are essential in motion pictures. By combining these cues, we can obtain a credible saliency map at a low computational cost.
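The cue combination described above can be sketched as a weighted sum of normalized feature maps. The weights and function names below are invented for illustration; the paper does not publish its fusion rule:

```python
import numpy as np

def combined_saliency(motion_map, face_map, w_motion=0.6, w_face=0.4):
    """Fuse normalized motion and face-presence maps into one saliency map.
    Weights are illustrative assumptions, not the paper's values."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_motion * norm(motion_map) + w_face * norm(face_map)

motion = np.zeros((4, 4)); motion[0, 0] = 5.0  # strong motion, top-left
faces = np.zeros((4, 4)); faces[3, 3] = 1.0    # face detected, bottom-right
sal = combined_saliency(motion, faces)
print(np.unravel_index(sal.argmax(), sal.shape))  # (0, 0): motion dominates
```

Normalizing each map before weighting keeps a strong response in one channel from drowning out the other, which matters when motion energy and face-detector scores live on very different scales.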
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
ERIC Educational Resources Information Center
McGrady, Harold J.; Olson, Don A.
To describe and compare the psychosensory functioning of normal children and children with specific learning disabilities, 62 learning disabled and 68 normal children were studied. Each child was given a battery of thirteen subtests on an automated psychosensory system representing various combinations of auditory and visual intra- and…
Multimodal visualization interface for data management, self-learning and data presentation.
Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M
2006-10-01
Multimodal visualization software, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the topic of visualization and modeling of various aspects of the human anatomy. Numerous tools used in radiology are integrated in the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow analysis of medical imaging data using advanced features outside radiological premises (for example, for patient review, conference presentation, or tutorial preparation). The system is free and based on an open-source software development architecture, so updates of the system for custom applications are possible.
Visualization of upconverting nanoparticles in strongly scattering media
Khaydukov, E. V.; Semchishen, V. A.; Seminogov, V. N.; Nechaev, A. V.; Zvyagin, A. V.; Sokolov, V. I.; Akhmanov, A. S.; Panchenko, V. Ya.
2014-01-01
Optical visualization systems are needed in medical applications for determining the localization of deep-seated luminescent markers in biotissues. The spatial resolution of such systems is limited by the scattering of the tissues. We present a novel epi-luminescent technique, which allows a 1.8-fold increase in the lateral spatial resolution in determining the localization of markers lying deep in a scattering medium compared to traditional visualization techniques. This goal is attained by using NaYF4:Yb3+/Tm3+@NaYF4 core/shell nanoparticles and a special optical fiber probe with combined channels for the excitation and detection of anti-Stokes luminescence signals. PMID:24940552
Real-Time Visualization of Network Behaviors for Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Best, Daniel M.; Bohn, Shawn J.; Love, Douglas V.
Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for which the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.
A Conserved Developmental Mechanism Builds Complex Visual Systems in Insects and Vertebrates
Joly, Jean-Stéphane; Recher, Gaelle; Brombin, Alessandro; Ngo, Kathy; Hartenstein, Volker
2016-01-01
The visual systems of vertebrates and many other bilaterian clades consist of complex neural structures guiding a wide spectrum of behaviors. Homologies at the level of cell types and even discrete neural circuits have been proposed, but many questions of how the architecture of visual neuropils evolved among different phyla remain open. In this review we argue that the profound conservation of genetic and developmental steps generating the eye and its target neuropils in fish and fruit flies supports a homology between some core elements of bilaterian visual circuitries. Fish retina and tectum, and fly optic lobe, develop from a partitioned, unidirectionally proliferating neurectodermal domain that combines slowly dividing neuroepithelial stem cells and rapidly amplifying progenitors with shared genetic signatures to generate large numbers and different types of neurons in a temporally ordered way. This peculiar ‘conveyor belt neurogenesis’ could play an essential role in generating the topographically ordered circuitry of the visual system. PMID:27780043
NASA Astrophysics Data System (ADS)
Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.
2000-05-01
A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze
Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.
2014-01-01
Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze [1,2] that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze [1,2]. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057
The implementation of thermal image visualization by HDL based on pseudo-color
NASA Astrophysics Data System (ADS)
Zhu, Yong; Zhang, JiangLing
2004-11-01
The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, covering the basic principle, the model, and an HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, so the data are both massive and real-time. The solution to this intractable problem is as follows. First, a reasonable system combining global pseudo-color visualization with accurate measurement of local areas of interest must be adopted. Then, HDL pseudo-color algorithms in an SoC (System on Chip) implement the system to ensure real-time operation. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding, and enhanced gray-level map coding are presented, together with their simulation results. The HDL-implemented pseudo-color visualization of thermal images described in the paper has effective applications in electric power equipment testing and medical diagnosis.
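One plausible reading of "proportional gray-level map coding" is a linear cold-to-hot color ramp applied per pixel. A software sketch of such a mapping (the paper's HDL coding schemes are not published in detail, so this particular ramp is an assumption):

```python
import numpy as np

def proportional_pseudocolor(gray):
    """Map 8-bit gray levels to RGB with a simple cold-to-hot ramp:
    blue falls over the lower half, green peaks at mid-gray, red rises
    over the upper half. An illustrative ramp, not the paper's exact one."""
    g = gray.astype(float) / 255.0
    r = np.clip(2.0 * g - 1.0, 0.0, 1.0)   # red rises in the upper half
    b = np.clip(1.0 - 2.0 * g, 0.0, 1.0)   # blue falls in the lower half
    grn = 1.0 - np.abs(2.0 * g - 1.0)      # green peaks at mid-gray
    return (np.stack([r, grn, b], axis=-1) * 255).astype(np.uint8)

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(proportional_pseudocolor(img)[0].tolist())
# coldest pixel -> pure blue, mid-gray -> green-dominated, hottest -> pure red
```

Because the mapping is a fixed per-pixel function of the gray level, it reduces in hardware to a small lookup table, which is what makes a real-time HDL implementation straightforward.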
Neuromorphic VLSI vision system for real-time texture segregation.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2008-10-01
The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
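The segregation stage above, filtering a texture with two orthogonally oriented Gabor-like receptive fields and comparing the responses, can be sketched in software. This is a toy model of the chip's behavior with invented kernel parameters, not the hardware's actual filters:

```python
import numpy as np

def gabor_kernel(theta, size=9, sigma=2.0, freq=0.25):
    """Real-valued Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

def filter_energy(img, kernel):
    """Sum of squared responses of a valid 2-D correlation
    (a naive loop, adequate for a small demo image)."""
    kh, kw = kernel.shape
    h, w = img.shape
    total = 0.0
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            total += float(np.sum(img[i:i + kh, j:j + kw] * kernel)) ** 2
    return total

# Synthetic textures: vertical stripes vs. horizontal stripes.
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
vert = np.sin(2 * np.pi * 0.25 * xx)
horiz = np.sin(2 * np.pi * 0.25 * yy)
k_v, k_h = gabor_kernel(0.0), gabor_kernel(np.pi / 2)
# Each texture responds far more to the matching orientation -- the
# orientation contrast used to segregate texture regions.
print(filter_energy(vert, k_v) > filter_energy(vert, k_h))    # True
print(filter_energy(horiz, k_h) > filter_energy(horiz, k_v))  # True
```

In the hardware system the same comparison happens per region rather than globally, so the boundary between differently oriented textures shows up as a step in the orientation-energy map.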
Dai, Yun; Zhao, Lina; Xiao, Fei; Zhao, Haoxin; Bao, Hua; Zhou, Hong; Zhou, Yifeng; Zhang, Yudong
2015-02-10
An adaptive optics visual simulation system combined with perceptual learning (PL), based on a 35-element bimorph deformable mirror (DM), was established. The larger stroke and smaller size of the bimorph DM give the system greater aberration correction and superposition ability and a more compact design. By simply modifying the control matrix or the reference matrix, selective correction or superposition of aberrations was realized in real time, similar to a conventional adaptive optics closed-loop correction. A PL function was integrated for the first time in addition to conventional adaptive optics visual simulation. PL training undertaken with high-order aberration correction clearly improved the visual function of adults with anisometropic amblyopia. The preliminary application of high-order aberration correction with PL training in amblyopia treatment is being validated with a large-scale population, and may have great potential for amblyopia treatment and maintenance of visual performance.
Visualization of system dynamics using phasegrams
Herbst, Christian T.; Herzel, Hanspeter; Švec, Jan G.; Wyman, Megan T.; Fitch, W. Tecumseh
2013-01-01
A new tool for visualization and analysis of system dynamics is introduced: the phasegram. Its application is illustrated with both classical nonlinear systems (logistic map and Lorenz system) and with biological voice signals. Phasegrams combine the advantages of sliding-window analysis (such as the spectrogram) with well-established visualization techniques from the domain of nonlinear dynamics. In a phasegram, time is mapped onto the x-axis, and various vibratory regimes, such as periodic oscillation, subharmonics or chaos, are identified within the generated graph by the number and stability of horizontal lines. A phasegram can be interpreted as a bifurcation diagram in time. In contrast to other analysis techniques, it can be automatically constructed from time-series data alone: no additional system parameter needs to be known. Phasegrams show great potential for signal classification and can act as the quantitative basis for further analysis of oscillating systems in many scientific fields, such as physics (particularly acoustics), biology or medicine. PMID:23697715
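The "bifurcation diagram in time" idea can be illustrated with a toy sketch (my illustration, not the authors' code): slide a window over a time series and, per window, count the distinct states visited, i.e., the number of "horizontal lines" a phasegram would show. Window sizes and tolerance below are arbitrary choices, and the logistic map from the abstract supplies the signal.

```python
# Toy phasegram sketch: the logistic map is ramped from a fixed-point regime
# (r = 2.8) into a period-2 regime (r = 3.2); the per-window line count
# rises from 1 to 2, mirroring a bifurcation unfolding in time.

def logistic_series(r_values, x0=0.5):
    """Iterate x <- r*x*(1-x) with a time-varying parameter r."""
    xs, x = [], x0
    for r in r_values:
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def line_count(series, window, step, tol=1e-3):
    """Per window: discard the first half as transient, then count clusters
    of values separated by more than `tol` (1 = steady, 2 = period-2, ...)."""
    counts = []
    for start in range(0, len(series) - window + 1, step):
        tail = sorted(series[start + window // 2 : start + window])
        distinct = 1
        for a, b in zip(tail, tail[1:]):
            if b - a > tol:
                distinct += 1
        counts.append(distinct)
    return counts

rs = [2.8] * 400 + [3.2] * 400
series = logistic_series(rs)
lines = line_count(series, window=400, step=400)   # one count per window
```

As in a real phasegram, no system parameter is needed at analysis time; only the time series itself is clustered.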
Young children's coding and storage of visual and verbal material.
Perlmutter, M; Myers, N A
1975-03-01
36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.
Visualization Co-Processing of a CFD Simulation
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
1999-01-01
OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 (O2K) system. The shared-memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared-memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress aimed at improving the parallel performance of the solver and removing the limitation of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.
The case against specialized visual-spatial short-term memory.
Morey, Candice C
2018-05-24
The dominant paradigm for understanding working memory, or the combination of the perceptual, attentional, and mnemonic processes needed for thinking, subdivides short-term memory (STM) according to whether memoranda are encoded in aural-verbal or visual formats. This traditional dissociation has been supported by examples of neuropsychological patients who seem to selectively lack STM for either aural-verbal, visual, or spatial memoranda, and by experimental research using dual-task methods. Though this evidence is the foundation of assumptions of modular STM systems, the case it makes for a specialized visual STM system is surprisingly weak. I identify the key evidence supporting a distinct verbal STM system (patients with apparent selective damage to verbal STM, and the resilience of verbal short-term memories to general dual-task interference) and apply these benchmarks to neuropsychological and experimental investigations of visual-spatial STM. Contrary to the evidence on verbal STM, patients with apparent visual or spatial STM deficits tend to experience a wide range of additional deficits, making it difficult to conclude that a distinct short-term store was damaged. Consistent with this, a meta-analysis of dual-task visual-spatial STM research shows that robust dual-task costs are consistently observed regardless of the domain or sensory code of the secondary task. Together, this evidence suggests that positing a specialized visual STM system is not necessary. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Spiral CT scanning technique in the detection of aspiration of LEGO foreign bodies.
Applegate, K E; Dardinger, J T; Lieber, M L; Herts, B R; Davros, W J; Obuchowski, N A; Maneker, A
2001-12-01
Radiolucent foreign bodies (FBs) such as plastic objects and toys remain difficult to identify on conventional radiographs of the neck and chest. Children may present with a variety of respiratory complaints, which may or may not be due to a FB. To determine whether radiolucent FBs such as plastic LEGOs and peanuts can be seen in the tracheobronchial tree or esophagus using low-dose spiral CT, and, if visible, to determine the optimal CT imaging technique, multiple spiral sequences were performed while varying the CT parameters and the presence and location of FBs in either the trachea or the esophagus, first on a neck phantom and then on a cadaver. Sequences were rated by three radiologists blinded to the presence of a FB using a single scoring system. The LEGO was well visualized in the trachea by all three readers (both lung and soft-tissue windowing: combined sensitivity 89%, combined specificity 89%) and to a lesser extent in the esophagus (combined sensitivity 31%, combined specificity 100%). The peanut was not well visualized (combined sensitivity < 35%). The optimal technique for visualizing the LEGO was 120 kV, 90 mA, 3-mm collimation, 0.75 s/revolution, and 2.0 pitch, which allowed coverage of the cadaver tracheobronchial tree (approximately 11 cm) in about 18 s. Although statistical power was low for detecting significant differences, all three readers noted higher average confidence ratings with lung windowing among 18 LEGO-in-trachea scans. Rapid, low-dose spiral CT may be used to visualize LEGO FBs in the airway or esophagus; peanuts were not well visualized.
Visual gravitational motion and the vestibular system in humans
Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka
2013-01-01
The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761
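The prior-plus-visual-signal combination reviewed here is often modeled as precision-weighted (Bayesian) cue fusion. A minimal sketch under Gaussian assumptions follows; the numeric values are illustrative, not taken from the studies reviewed.

```python
# Minimal Gaussian cue-fusion sketch: combine a reliable prior on
# gravitational acceleration with a noisy on-line visual estimate.
# Precision-weighted averaging pulls the estimate toward g.

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior mean/variance from two Gaussian cues (conjugate update)."""
    w = obs_var / (prior_var + obs_var)          # weight given to the prior
    mean = w * prior_mean + (1.0 - w) * obs_mean
    var = (prior_var * obs_var) / (prior_var + obs_var)
    return mean, var

# A strong gravity prior (g = 9.81 m/s^2, low variance) dominates a noisy
# visual acceleration estimate of 7.0 m/s^2; the fused variance is smaller
# than either cue's alone.
mean, var = fuse(9.81, 0.5, 7.0, 2.0)
```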
Heyers, Dominik; Manns, Martina; Luksch, Harald; Güntürkün, Onur; Mouritsen, Henrik
2007-09-26
The magnetic compass of migratory birds has been suggested to be light-dependent. Retinal cryptochrome-expressing neurons and a forebrain region, "Cluster N", show high neuronal activity when night-migratory songbirds perform magnetic compass orientation. By combining neuronal tracing with behavioral experiments leading to sensory-driven gene expression of the neuronal activity marker ZENK during magnetic compass orientation, we demonstrate a functional neuronal connection between the retinal neurons and Cluster N via the visual thalamus. Thus, the two areas of the central nervous system being most active during magnetic compass orientation are part of an ascending visual processing stream, the thalamofugal pathway. Furthermore, Cluster N seems to be a specialized part of the visual wulst. These findings strongly support the hypothesis that migratory birds use their visual system to perceive the reference compass direction of the geomagnetic field and that migratory birds "see" the reference compass direction provided by the geomagnetic field.
Catecholamines alter the intrinsic variability of cortical population activity and perception
Avramiea, Arthur-Ervin; Nolte, Guido; Engel, Andreas K.; Linkenkaer-Hansen, Klaus; Donner, Tobias H.
2018-01-01
The ascending modulatory systems of the brain stem are powerful regulators of global brain state. Disturbances of these systems are implicated in several major neuropsychiatric disorders. Yet, how these systems interact with specific neural computations in the cerebral cortex to shape perception, cognition, and behavior remains poorly understood. Here, we probed the effect of two such systems, the catecholaminergic (dopaminergic and noradrenergic) and cholinergic systems, on an important aspect of cortical computation: its intrinsic variability. To this end, we combined placebo-controlled pharmacological intervention in humans, recordings of cortical population activity using magnetoencephalography (MEG), and psychophysical measurements of the perception of ambiguous visual input. A low-dose catecholaminergic, but not cholinergic, manipulation altered the rate of spontaneous perceptual fluctuations as well as the temporal structure of “scale-free” population activity of large swaths of the visual and parietal cortices. Computational analyses indicate that both effects were consistent with an increase in excitatory relative to inhibitory activity in the cortical areas underlying visual perceptual inference. We propose that catecholamines regulate the variability of perception and cognition through dynamically changing the cortical excitation–inhibition ratio. The combined readout of fluctuations in perception and cortical activity we established here may prove useful as an efficient and easily accessible marker of altered cortical computation in neuropsychiatric disorders. PMID:29420565
Visualization of evolving laser-generated structures by frequency domain tomography
NASA Astrophysics Data System (ADS)
Chang, Yenyu; Li, Zhengyan; Wang, Xiaoming; Zgadzaj, Rafal; Downer, Michael
2011-10-01
We introduce frequency domain tomography (FDT) for single-shot visualization of time-evolving refractive index structures (e.g., laser wakefields, nonlinear index structures) moving at light speed. Previous researchers demonstrated single-shot frequency domain holography (FDH), in which a probe-reference pulse pair co-propagates with the laser-generated structure, to obtain snapshot-like images. However, in FDH, information about the structure's evolution is averaged. To visualize an evolving structure, we use several frequency domain streak cameras (FDSCs), in each of which a probe-reference pulse pair propagates at an angle to the propagation direction of the laser-generated structure. The combination of several FDSCs constitutes the FDT system. We present experimental results for a 4-probe FDT system that has imaged the whole-beam self-focusing of a pump pulse propagating through glass in a single laser shot. Combining temporal and angle multiplexing methods, we successfully processed data from four probe pulses in one spectrometer in a single shot. The output of data processing is a multi-frame movie of the self-focusing pulse. Our results promise the possibility of visualizing the evolving laser wakefield structures that underlie laser-plasma accelerators used for multi-GeV electron acceleration.
Anatomy and physiology of the afferent visual system.
Prasad, Sashank; Galetta, Steven L
2011-01-01
The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf
2015-01-01
To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. 
Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.
Stereoscopic applications for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2007-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
Principal components colour display of ERTS imagery
NASA Technical Reports Server (NTRS)
Taylor, M. M.
1974-01-01
In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
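A software sketch of the general approach follows. It is my reconstruction under stated assumptions: a naive power-iteration PCA stands in for whatever decomposition was used, and the opponent-to-RGB mapping is illustrative rather than the paper's calibrated transformation.

```python
import math, random

# Sketch of the technique above: extract three principal components from
# n-band pixels, then map them onto brightness, red-green, and blue-yellow
# opponent axes. Power-iteration PCA and the opponent-to-RGB mapping are
# illustrative assumptions, not the paper's actual transformation.

def pca_components(pixels, k=3, iters=300):
    """First k principal components via power iteration with deflation."""
    n = len(pixels[0])
    mean = [sum(p[i] for p in pixels) / len(pixels) for i in range(n)]
    X = [[p[i] - mean[i] for i in range(n)] for p in pixels]
    C = [[sum(x[i] * x[j] for x in X) / len(X) for j in range(n)]
         for i in range(n)]                      # covariance matrix
    comps = []
    for _ in range(k):
        v = [random.random() for _ in range(n)]
        for _ in range(iters):                   # power iteration
            w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(c * c for c in w)) or 1.0
            v = [c / norm for c in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(n))
                  for i in range(n))
        comps.append(v)
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(n)]
             for i in range(n)]                  # deflate the found component
    return mean, comps

def opponent_to_rgb(brightness, red_green, blue_yellow):
    """Illustrative opponent-to-RGB mapping (an assumption)."""
    return (brightness + red_green, brightness - red_green,
            brightness + blue_yellow)

random.seed(0)
# Toy 4-band "pixels" with decreasing per-band variance.
pixels = [[random.gauss(0.0, s) for s in (3.0, 2.0, 1.0, 0.5)]
          for _ in range(300)]
mean, comps = pca_components(pixels)
scores = [[sum(c[i] * (p[i] - mean[i]) for i in range(4)) for c in comps]
          for p in pixels]
rgb = [opponent_to_rgb(*s) for s in scores]
```

Because the three component scores are informationally independent (orthogonal), each colour dimension carries distinct information, which is the point of the technique.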
Hadoop-based implementation of processing medical diagnostic records for visual patient system
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo
2018-03-01
We introduced the Visual Patient (VP) concept and method for visually representing and indexing patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017); a VP enables a doctor to review a large amount of a patient's IDR in a limited appointed time slot. Here, we present a new data processing architecture for the VP system (VPS) that acquires, processes, and stores various kinds of IDR to build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. This architecture, called the Medical Information Processing System (MIPS), combines the Hadoop batch processing architecture with the Storm stream processing architecture. MIPS implements parallel processing of various kinds of clinical data with high efficiency, drawing from disparate hospital information systems such as PACS, RIS, LIS, and HIS.
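The batch/stream combination described here can be pictured with a toy "lambda architecture" sketch. Everything below is purely illustrative: the class, field names, and record schema are invented, not the MIPS design.

```python
# Toy sketch of a batch/stream split (a "lambda architecture"): new records
# take a stream path for low-latency visibility, while a batch path
# periodically recomputes the authoritative view from the full record store.
# Names and record fields are invented for illustration.
class MiniMIPS:
    def __init__(self):
        self.master = []          # all records (batch input)
        self.realtime_view = {}   # patient -> count since last batch run
        self.batch_view = {}      # patient -> count from last batch run

    def ingest(self, record):
        """Stream path: append to the master store, update the live view."""
        self.master.append(record)
        pid = record["patient_id"]
        self.realtime_view[pid] = self.realtime_view.get(pid, 0) + 1

    def run_batch(self):
        """Batch path: recompute the full view from the master store."""
        self.batch_view = {}
        for rec in self.master:
            pid = rec["patient_id"]
            self.batch_view[pid] = self.batch_view.get(pid, 0) + 1
        self.realtime_view = {}   # stream view resets once batch catches up

    def record_count(self, pid):
        """Query merges the batch view with the fresher stream view."""
        return self.batch_view.get(pid, 0) + self.realtime_view.get(pid, 0)

m = MiniMIPS()
for rec in [{"patient_id": "p1"}, {"patient_id": "p1"}, {"patient_id": "p2"}]:
    m.ingest(rec)
m.run_batch()                      # batch view now covers all three records
m.ingest({"patient_id": "p1"})     # arrives after the batch run (stream path)
```

The merged query result stays correct whether a record has been absorbed by the batch view or still sits in the stream view.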
Mosaic and Concerted Evolution in the Visual System of Birds
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N.; Moore, Bret A.; Fernández-Juricic, Esteban; Corfield, Jeremy R.; Krilow, Justin M.; Kolominsky, Jeffrey; Wylie, Douglas R.
2014-01-01
Two main models have been proposed to explain how the relative size of neural structures varies through evolution. In the mosaic evolution model, individual brain structures vary in size independently of each other, whereas in the concerted evolution model developmental constraints result in different parts of the brain varying in size in a coordinated manner. Several studies have shown variation in the relative size of individual nuclei in the vertebrate brain, but it is currently not known whether nuclei belonging to the same functional pathway vary independently of each other or in a concerted manner. The visual system of birds offers an ideal opportunity to test which of the two models applies to an entire sensory pathway. Here, we examine the relative size of 9 different visual nuclei across 98 species of birds. This includes data on interspecific variation in the cytoarchitecture and relative size of the isthmal nuclei, which has not been previously reported. We also use a combination of statistical analyses, phylogenetically corrected principal component analysis, and evolutionary rates of change in the absolute and relative size of the nine nuclei to test whether the visual nuclei evolved in a concerted or mosaic manner. Our results strongly indicate a combination of mosaic and concerted evolution in the relative size of the nine nuclei within the avian visual system. Specifically, the relative sizes of the isthmal nuclei and parts of the tectofugal pathway covary across species in a concerted fashion, whereas the relative volumes of the other visual nuclei vary independently of one another, as predicted by the mosaic model. Our results suggest the covariation of different neural structures depends not only on the functional connectivity of each nucleus, but also on the diversity of afferents and efferents of each nucleus. PMID:24621573
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with technology advances to tackle the challenges of modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. Combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
Code of Federal Regulations, 2012 CFR
2012-10-01
... device means an electronic or electrical device used to conduct oral, written, or visual communication... equipment, or track motor car, singly or in combination with other equipment, on the track of a railroad..., roadway, signal and communication systems, electric traction systems, roadway facilities or roadway...
Alvarez, George A.; Nakayama, Ken; Konkle, Talia
2016-01-01
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
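The representational-similarity logic of this study can be sketched as follows. The patterns and search times below are invented for illustration, and the sign convention assumes (as the abstract implies) that more similar target/distractor categories yield slower search.

```python
# Toy representational similarity analysis (RSA) sketch: correlate pairwise
# neural-pattern dissimilarity with visual search times across category
# pairs. All values below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

patterns = {                      # toy "neural response patterns"
    "face": [1.0, 0.2, 0.1],
    "car":  [0.1, 1.0, 0.2],
    "body": [0.9, 0.3, 0.2],      # similar to "face"
}
search_time = {                   # toy search times (s): similar pairs slower
    ("face", "car"): 0.65, ("face", "body"): 2.0, ("car", "body"): 0.67,
}

pairs = sorted(search_time)
dissim = [1.0 - pearson(patterns[a], patterns[b]) for a, b in pairs]
times = [search_time[p] for p in pairs]
brain_behavior_r = pearson(dissim, times)   # strongly negative in this toy
```

A strongly negative brain/behavior correlation here plays the role of the study's finding that representational structure predicts search performance.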
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
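Two of the simplest pixel-level fusion rules in the family named above can be sketched directly: plain averaging (one of the seven algorithms tested) and a per-pixel max-select rule for comparison. The 2x2 "images" are toy values of my choosing; real pipelines use pyramid- or PCA-based schemes as the abstract notes.

```python
# Sketch of two simple pixel-level fusion rules: averaging (one of the
# algorithms named above) and max-select. The 2x2 "images" are toy values
# in [0, 1], not real sensor data.

def fuse_average(a, b):
    """Average fusion: each output pixel is the mean of the two sensors."""
    return [[(x + y) / 2.0 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_max(a, b):
    """Max-select fusion: keep the stronger response at each pixel."""
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

visible = [[0.75, 0.25], [0.25, 1.0]]   # toy visible-band image
thermal = [[0.25, 0.75], [0.75, 0.0]]   # toy thermal-band image

avg = fuse_average(visible, thermal)    # every pixel becomes 0.5 here
sel = fuse_max(visible, thermal)
```

The contrast between the two outputs illustrates why the study found sensor combination, not just the fusion algorithm, drives efficiency: averaging can wash out complementary detail that a selection rule preserves.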
Yang, Minghui; Sun, Steven; Kostov, Yordan
2010-01-01
There is a well-recognized need for low-cost biodetection technologies for resource-poor settings with minimal medical infrastructure. Lab-on-a-chip (LOC) technology has the ability to perform biological assays in such settings. The aim of this work is to develop a low-cost, high-throughput detection system for the analysis of 96 samples simultaneously outside the laboratory setting. To achieve this aim, several biosensing elements were combined: a syringe-operated ELISA lab-on-a-chip (ELISA-LOC), which integrates a fluid delivery system into a miniature 96-well plate; a simplified non-enzymatic reporter and detection approach using a gold nanoparticle-antibody conjugate as a secondary antibody and silver enhancement of the visual signal; and carbon nanotubes (CNT) to increase primary antibody immobilization and improve assay sensitivity. Combined, these elements obviate the need for an ELISA washer, electrical power for operation, and a sophisticated detector. We demonstrate the use of the device for detection of Staphylococcal enterotoxin B, a major foodborne toxin, using three modes of detection: visual inspection, CCD camera, and document scanner. With visual detection or a document scanner used to measure the signal, the limit of detection (LOD) was 0.5 ng/ml; for precise quantitation of the signal using densitometry with a CCD camera, the LOD was 0.1 ng/ml. The observed sensitivity is in the same range as laboratory-based ELISA testing. The point-of-care device can analyze 96 samples simultaneously, permitting high-throughput diagnostics in the field and in resource-poor areas without ready access to laboratory facilities or electricity. PMID:21503269
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bioinspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to achieve high accuracy and robustness. We present a novel tightly coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach centimetre-level positioning accuracy.
Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.
Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas
2010-01-01
Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the day. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed best, while visual warnings performed worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition about warning systems, multimodal warnings involving visual cues degraded the effectiveness of NVES for older drivers.
CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.
Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj
2018-01-01
Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.
Soldier-worn augmented reality system for tactical icon visualization
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
2012-06-01
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
Schut, Martijn J; Van der Stoep, Nathan; Postma, Albert; Van der Stigchel, Stefan
2017-06-01
To facilitate visual continuity across eye movements, the visual system must presaccadically acquire information about the future foveal image. Previous studies have indicated that visual working memory (VWM) affects saccade execution. However, the reverse relation, the effect of saccade execution on VWM load, is less clear. To investigate the causal link between saccade execution and VWM, we combined a VWM task and a saccade task. Participants were instructed to remember one, two, or three shapes and performed either a no-saccade, a single-saccade, or a dual (corrective) saccade task. The results indicate that items stored in VWM are reported less accurately if a single-saccade or dual-saccade task is performed in addition to retaining items in VWM. Importantly, the loss of response accuracy for items retained in VWM caused by performing a saccade was similar to committing an extra item to VWM. In a second experiment, we observed no cost of executing a saccade for auditory working memory performance, indicating that executing a saccade exclusively taxes the VWM system. Our results suggest that the visual system presaccadically stores the upcoming retinal image, which carries a VWM load similar to committing one extra item to memory and interferes with stored VWM content. After the saccade, the visual system can retrieve this item from VWM to evaluate saccade accuracy. Our results support the idea that VWM is a system directly linked to saccade execution that promotes visual continuity across saccades.
Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F
2007-01-01
Background: Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results: We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion: MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets.
In the not-too-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost GeoWall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
DOT National Transportation Integrated Search
1999-12-01
To achieve the goals for Advanced Traveler Information Systems (ATIS), significant information will necessarily be provided to the driver. A primary ATIS design issue is the display modality (i.e., visual, auditory, or the combination) selected for p...
Shwirl: Meaningful coloring of spectral cube data with volume rendering
NASA Astrophysics Data System (ADS)
Vohl, Dany
2017-04-01
Shwirl visualizes spectral data cubes with meaningful coloring methods. The program was developed to investigate transfer functions, which combine volumetric elements (voxels) to set color, and graphics shaders, functions used to compute properties of the final image such as color, depth, and/or transparency, as enablers for scientific visualization of astronomical data. The program uses Astropy (ascl:1304.002) to handle FITS files and World Coordinate System data, Qt (and PyQt) for the user interface, and VisPy, an object-oriented Python visualization library binding onto OpenGL.
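A transfer function of the kind Shwirl investigates maps voxel intensity to color and opacity before the shader composites the volume. The sketch below is a minimal CPU stand-in for that pipeline, not Shwirl's actual implementation; the ramp endpoints, color mapping, and random cube are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spectral cube: (spectral channels, y, x) intensities.
cube = rng.random((16, 32, 32))

def transfer_function(cube, vmin=0.2, vmax=0.9):
    """Map voxel intensity to RGBA: a simple two-color ramp for
    color and a linear opacity ramp that clips faint voxels."""
    t = np.clip((cube - vmin) / (vmax - vmin), 0.0, 1.0)
    rgba = np.empty(cube.shape + (4,))
    rgba[..., 0] = t          # red grows with intensity
    rgba[..., 1] = 1.0 - t    # green fades with intensity
    rgba[..., 2] = 0.5        # constant blue component
    rgba[..., 3] = t          # faint voxels become transparent
    return rgba

def composite(rgba):
    """Back-to-front alpha compositing along the spectral axis:
    a per-pixel stand-in for what the GPU shader does."""
    out = np.zeros(rgba.shape[1:3] + (3,))
    for sl in rgba[::-1]:
        a = sl[..., 3:4]
        out = a * sl[..., :3] + (1.0 - a) * out
    return out

image = composite(transfer_function(cube))
```

In practice the per-voxel mapping and the compositing loop both run in a shader on the GPU, which is what makes interactive exploration of large cubes feasible.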
Trans-oral miniature X-ray radiation delivery system with endoscopic optical feedback.
Boese, Axel; Johnson, Fredrick; Ebert, Till; Mahmoud-Pashazadeh, Ali; Arens, Christoph; Friebe, Michael
2017-11-01
Surgery, chemotherapy, and/or external radiation therapy are the standard options for the treatment of laryngeal cancer. Trans-oral access for surgery reduces trauma and hospitalization time. A new trend in treatment is organ-preserving surgery. To avoid regrowth of the cancer, this type of surgery can be combined with radiation therapy. Because external radiation also exposes healthy tissue surrounding the cancerous zone, local, direct intraoral radiation delivery would be beneficial. A general concept for a trans-oral radiation system was designed, based on clinical need identification with a medical user. A miniaturized X-ray tube was used as the radiation source for intraoperative radiation delivery. To reduce the dose delivered to healthy areas, the X-ray source was collimated by a newly designed adjustable shielding system forming part of the housing. For direct optical visualization of the radiation zone, a miniature flexible endoscope was integrated into the system. The endoscopic light cone and field of view were aligned with the collimated radiation zone. The intraoperative radiation system was mounted on a semi-automatic medical holder combined with a frontal actuator for rotational and translational movement using piezoelectric motors to provide precise placement. The entire technical set-up was tested in a simulated environment. The shielding of the X-ray source was verified by conventional detector-based dose measurements. The delivered dose was estimated with an ionization chamber. The radiation zone was adjusted by a manual control mechanism integrated into the handpiece of the device. An endoscopic fibre was also added to provide visualization and illumination of the radiation zone. 
The combination of the radiation system with the semi-automatic holder and actuator offered precise and stable positioning of the device in the micrometre range and will allow future combination with a radiation planning system. The presented system was designed for radiation therapy of the oral cavity and the larynx. This first set-up aimed to cover all clinical aspects necessary for later use in surgery. The miniaturized X-ray tube offers the size and power required for intraoperative radiation therapy. The adjustable shielding system, in combination with the holder and actuator, provides precise placement. Visualization of the radiation zone allows it to be targeted and observed.
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1980-01-01
A conceptual design of a visual system for a rotorcraft flight simulator is presented, along with drive logic elements for a coupled motion base for such a simulator. The design is the result of an assessment of many potential arrangements of electro-optical elements and is a concept considered feasible for the application. The motion drive elements represent an example logic for a coupled motion base and are essentially an appeal to the designers of such logic to combine their washout and braking functions.
Augmentation of French grunt diet description using combined visual and DNA-based analyses
Hargrove, John S.; Parkyn, Daryl C.; Murie, Debra J.; Demopoulos, Amanda W.J.; Austin, James D.
2012-01-01
Trophic linkages within a coral-reef ecosystem may be difficult to discern in fish species that reside on, but do not forage on, coral reefs. Furthermore, dietary analysis of fish can be difficult in situations where prey is thoroughly macerated, resulting in many visually unrecognisable food items. The present study examined whether the inclusion of a DNA-based method could improve the identification of prey consumed by French grunt, Haemulon flavolineatum, a reef fish that possesses pharyngeal teeth and forages on soft-bodied prey items. Visual analysis indicated that crustaceans were most abundant numerically (38.9%), followed by sipunculans (31.0%) and polychaete worms (5.2%), with a substantial number of unidentified prey (12.7%). For the subset of prey with both visual and molecular data, there was a marked reduction in the number of unidentified sipunculans (visual – 31.1%, combined – 4.4%), unidentified crustaceans (visual – 15.6%, combined – 6.7%), and unidentified taxa (visual – 11.1%, combined – 0.0%). Utilising results from both methodologies resulted in an increased number of prey placed at the family level (visual – 6, combined – 33) and species level (visual – 0, combined – 4). Although more costly than visual analysis alone, our study demonstrated the feasibility of DNA-based identification of visually unidentifiable prey in the stomach contents of fish.
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
Design, Control and in Situ Visualization of Gas Nitriding Processes
Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy
2010-01-01
The article presents a complex system for the design, in situ visualization and control of a commonly used surface treatment: the gas nitriding process. The computer-aided design concept uses analytical mathematical models and artificial intelligence methods. As a result, the system enables poly-optimization and poly-parametric simulation of the course of the process, combined with visualization of changes in the values of process parameters as a function of time, as well as prediction of the properties of nitrided layers. For in situ visualization of nitrided layer growth, computer procedures were developed that correlate the direct and differential voltage-time traces of the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to combine, during the process, the registered voltage-time traces with models of the process. PMID:22315536
Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile
NASA Astrophysics Data System (ADS)
Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.
2000-06-01
The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.
Haig, Jennifer; Barbeau, Martin; Ferreira, Alberto
2016-07-01
Objective: Ranibizumab, an anti-vascular endothelial growth factor agent designed for ocular use, has been deemed cost-effective in multiple indications by several Health Technology Assessment bodies. This study assessed the cost-effectiveness of ranibizumab monotherapy or combination therapy (ranibizumab plus laser photocoagulation) compared with laser monotherapy for the treatment of visual impairment due to diabetic macular edema (DME). Methods: A Markov model was developed in which patients moved between health states defined by best-corrected visual acuity (BCVA) intervals and an absorbing 'death' state. The population of interest was patients with DME due to type 1 or type 2 diabetes mellitus. Baseline characteristics were based on those of participants in the RESTORE study. The main outputs were costs (in 2013 CA$) and health outcomes (in quality-adjusted life-years [QALYs]), from which the incremental cost-effectiveness ratio (ICER) was calculated. This cost-utility analysis was conducted from healthcare system and societal perspectives in Quebec. Results: From a healthcare system perspective, the ICERs for ranibizumab monotherapy and combination therapy vs laser monotherapy were CA$24 494 and CA$36 414 per QALY gained, respectively. The incremental costs per year without legal blindness for ranibizumab monotherapy and combination therapy vs laser monotherapy were CA$15 822 and CA$20 616, respectively. Based on the generally accepted Canadian ICER threshold of CA$50 000 per QALY gained, ranibizumab monotherapy and combination therapy were found to be cost-effective compared with laser monotherapy. From a societal perspective, ranibizumab monotherapy and combination therapy provided greater benefits at lower costs than laser monotherapy (ranibizumab therapy dominated laser therapy). 
Conclusions: Ranibizumab monotherapy and combination therapy resulted in increased quality-adjusted survival and time without legal blindness, and lower costs from a societal perspective, compared with laser monotherapy.
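The Markov cost-utility structure described in the Methods can be sketched generically. All transition probabilities, state costs, and utilities below are invented placeholders, not the RESTORE-based model inputs; the health states are collapsed to two BCVA bands plus death, and discounting is omitted for brevity.

```python
import numpy as np

states = ["good_vision", "poor_vision", "dead"]  # BCVA bands + absorbing death state

def run_cohort(P, costs, utilities, cycles=20):
    """Track a cohort through the Markov chain, accumulating total
    cost and QALYs (one cycle = one year, no discounting)."""
    dist = np.array([1.0, 0.0, 0.0])   # everyone starts in good_vision
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += dist @ costs
        total_qaly += dist @ utilities
        dist = dist @ P                 # advance one cycle
    return total_cost, total_qaly

# Hypothetical inputs: the new therapy slows vision loss but costs more.
P_laser = np.array([[0.80, 0.18, 0.02],
                    [0.05, 0.92, 0.03],
                    [0.00, 0.00, 1.00]])
P_ranibizumab = np.array([[0.90, 0.08, 0.02],
                          [0.15, 0.82, 0.03],
                          [0.00, 0.00, 1.00]])
utilities = np.array([0.85, 0.60, 0.0])

c_laser, q_laser = run_cohort(P_laser, np.array([500.0, 800.0, 0.0]), utilities)
c_ranb, q_ranb = run_cohort(P_ranibizumab, np.array([4000.0, 800.0, 0.0]), utilities)

# ICER: extra cost per extra QALY of the new therapy vs the comparator.
icer = (c_ranb - c_laser) / (q_ranb - q_laser)
```

The resulting ICER is then compared against a willingness-to-pay threshold (CA$50 000 per QALY in the study) to judge cost-effectiveness.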
Visualization of fluid turbulence and acoustic cavitation during phacoemulsification.
Tognetto, Daniele; Sanguinetti, Giorgia; Sirotti, Paolo; Brezar, Edoardo; Ravalico, Giuseppe
2005-02-01
To describe a technique for visualizing fluid turbulence and cavitational energy created by ultrasonic phaco tips. University Eye Clinic of Trieste, Trieste, Italy. Generation of cavitational energy by the phaco tip was visualized using an optical test bench comprising several components. The technique uses a telescope system to expand a laser light source into a coherent, collimated beam of light with a diameter of approximately 50.0 mm. The expanded laser beam shines on the test tube containing the tip activated in a medium of water or ophthalmic viscosurgical device (OVD). Two precision optical collimators complete the optical test bench and form the system used to focus data onto a charge-coupled device television camera connected to a recorder. Images of irrigation, irrigation combined with aspiration, irrigation/aspiration, and phacosonication were obtained with the tip immersed in a tube containing water or OVD. Optical image processing enabled acoustic cavitation to be visualized during phacosonication. The system is a possible means of evaluating a single phaco apparatus power setting and comparing phaco machines and techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.
Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow-based system for performing scientific visualization (AVS), extensions to the system to support VR input devices, and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation, as well as some viewing parameters, using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.
Visible digital watermarking system using perceptual models
NASA Astrophysics Data System (ADS)
Cheng, Qiang; Huang, Thomas S.
2001-03-01
This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of the owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even though the algorithm is publicly known, provided the key to the random sequence generator is kept secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
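The key-dependent, translucent overlay idea can be sketched as follows. This is a simplified stand-in, not the paper's method: a single global blending strength `alpha` replaces the DCT/DWT perceptual modulation, and only the watermark placement is driven by the key; the image and logo arrays are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 8-bit grayscale primary image and binary watermark logo.
primary = rng.integers(0, 256, size=(128, 128)).astype(float)
logo = (rng.random((32, 32)) > 0.5).astype(float)

def embed_visible(primary, logo, key=42, alpha=0.3):
    """Overlay the logo translucently at a key-dependent location.
    A perceptual model would modulate alpha per region; here one
    global strength stands in for that step."""
    keyed = np.random.default_rng(key)   # the key drives placement
    h, w = logo.shape
    y = keyed.integers(0, primary.shape[0] - h)
    x = keyed.integers(0, primary.shape[1] - w)
    out = primary.copy()
    region = out[y:y + h, x:x + w]
    # Alpha-blend toward white wherever the logo is "on".
    out[y:y + h, x:x + w] = (1 - alpha * logo) * region + alpha * logo * 255.0
    return np.clip(out, 0, 255)

marked = embed_visible(primary, logo)
```

Without the key, an attacker cannot reproduce the placement (and, in the full system, the per-region strength), which is what makes automated removal difficult.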
Information technology aided exploration of system design spaces
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Kiper, James D.; Kalafat, Selcuk
2004-01-01
We report on a practical application of information technology techniques to help system engineers effectively explore large design spaces. We make use of heuristic search, visualization and data mining, the combination of which we have implemented within a risk management tool in use at JPL and NASA.
Multimodal fusion of polynomial classifiers for automatic person recognition
NASA Astrophysics Data System (ADS)
Broun, Charles C.; Zhang, Xiaozheng
2001-03-01
With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty of acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as to improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. 
A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
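The late-integration step described above can be sketched as a weighted combination of per-modality verification scores. The function names, the linear fusion rule, and the idea of tying the weight to estimated audio SNR are illustrative assumptions, not the paper's exact probabilistic model:

```python
def fuse_scores(audio_score, visual_score, audio_weight):
    """Late integration: weighted combination of per-modality
    verification scores (e.g., log-likelihood ratios).
    audio_weight in [0, 1] could be tied to estimated SNR so the
    visual (lip-motion) channel dominates when audio is noisy."""
    return audio_weight * audio_score + (1.0 - audio_weight) * visual_score


def accept(audio_score, visual_score, audio_weight, threshold=0.0):
    """Accept the identity claim if the fused score clears a threshold."""
    return fuse_scores(audio_score, visual_score, audio_weight) > threshold
```

At low SNR, a small `audio_weight` lets a clean lip-motion score override a corrupted audio score, which is the motivation for fusing the modalities in the first place.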
Management of varicella zoster virus retinitis in AIDS
Moorthy, R.; Weinberg, D.; Teich, S.; Berger, B.; Minturn, J.; Kumar, S.; Rao, N.; Fowell, S.; Loose, I.; Jampol, L.
1997-01-01
AIMS/BACKGROUND—Varicella zoster virus retinitis (VZVR) in patients with AIDS, also called progressive outer retinal necrosis (PORN), is a necrotising viral retinitis which has resulted in blindness in most patients. The purposes of this study were to investigate the clinical course and visual outcome, and to determine if the choice of a systemic antiviral therapy affected the final visual outcome in patients with VZVR and AIDS. METHODS—A review of the clinical records of 20 patients with VZVR from six centres was performed. Analysis of the clinical characteristics at presentation was performed. Kruskal-Wallis non-parametric one-way analysis of variance (KWAOV) of the final visual acuities of patients treated with acyclovir, ganciclovir, foscarnet, or a combination of foscarnet and ganciclovir was carried out. RESULTS—Median follow up was 6 months (range 1.3-26 months). On presentation, 14 of 20 patients (70%) had bilateral disease, and 75% (15 of 20 patients) had previous or concurrent extraocular manifestations of VZV infection. Median initial and final visual acuities were 20/40 and hand movements, respectively. Of 39 eyes involved, 19 eyes (49%) were no light perception at last follow up; 27 eyes (69%) developed rhegmatogenous retinal detachments. Patients treated with combination ganciclovir and foscarnet therapy or ganciclovir alone had significantly better final visual acuity than those treated with either acyclovir or foscarnet (KWAOV: p = 0.0051). CONCLUSIONS—This study represents the second largest series, the longest follow up, and the first analysis of visual outcomes based on medical therapy for AIDS patients with VZVR. Aggressive medical treatment with appropriate systemic antivirals may improve long term visual outcome in patients with VZVR. Acyclovir appears to be relatively ineffective in treating this disease. PMID:9135381
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
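The reported bits/min figure is an information transfer rate (ITR). Assuming the standard Wolpaw definition (the paper's exact computation, class count, and trial timing are not given here), it can be computed as:

```python
import math


def wolpaw_itr(n_classes, accuracy, trials_per_min):
    """Information transfer rate in bits/min under the Wolpaw model:
    bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the selection rate. Assumes errors are uniformly
    distributed over the N-1 wrong classes."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * trials_per_min
```

A perfect binary classifier at 10 trials/min yields 10 bits/min, while chance-level binary accuracy (50%) yields 0, which is why both accuracy and trial duration matter for a PIN-entry BCI.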
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on the disease-related statistical facts to detect abnormalities and classify them into multiple categories to narrow down the possible medical reasons of detecting. Our method is in contrast with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
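As a minimal sketch of the semi-supervised idea — modelling only "normal" (healthy) data and flagging deviations, with no labeled disease examples — one could apply a z-score rule to a facial feature measurement; the feature representation and the threshold are illustrative assumptions, not the paper's method:

```python
import statistics


def anomaly_flags(feature_values, healthy_samples, z_thresh=3.0):
    """Flag facial feature measurements that deviate from statistics
    collected on healthy faces (semi-supervised: only 'normal' data
    is modelled; no labeled disease examples are required)."""
    mu = statistics.mean(healthy_samples)
    sigma = statistics.stdev(healthy_samples)
    return [abs(v - mu) / sigma > z_thresh for v in feature_values]
```

In this spirit, any visually observable symptom that pushes a feature far outside the healthy distribution is flagged, which is what lets such a system surface unusual presentations that a supervised classifier was never trained on.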
Color Processing in the Early Visual System of Drosophila.
Schnaitmann, Christopher; Haikala, Väinö; Abraham, Eva; Oberhauser, Vitus; Thestrup, Thomas; Griesbeck, Oliver; Reiff, Dierk F
2018-01-11
Color vision extracts spectral information by comparing signals from photoreceptors with different visual pigments. Such comparisons are encoded by color-opponent neurons that are excited at one wavelength and inhibited at another. Here, we examine the circuit implementation of color-opponent processing in the Drosophila visual system by combining two-photon calcium imaging with genetic dissection of visual circuits. We report that color-opponent processing of UVshort/blue and UVlong/green is already implemented in R7/R8 inner photoreceptor terminals of "pale" and "yellow" ommatidia, respectively. R7 and R8 photoreceptors of the same type of ommatidia mutually inhibit each other directly via HisCl1 histamine receptors and receive additional feedback inhibition that requires the second histamine receptor Ort. Color-opponent processing at the first visual synapse represents an unexpected commonality between Drosophila and vertebrates; however, the differences in the molecular and cellular implementation suggest that the same principles evolved independently. Copyright © 2017 Elsevier Inc. All rights reserved.
Liu, Jessica L; McAnany, J Jason; Wilensky, Jacob T; Aref, Ahmad A; Vajaranant, Thasarat S
2017-06-01
To evaluate the nature and extent of letter contrast sensitivity (CS) deficits in glaucoma patients using a commercially available computer-based system (M&S Smart System II) and to compare the letter CS measurements to standard clinical measures of visual function. Ninety-four subjects with primary open-angle glaucoma participated. Each subject underwent visual acuity, letter CS, and standard automated perimetry testing (Humphrey SITA 24-2). All subjects had a best-corrected visual acuity (BCVA) of 0.3 logMAR (20/40 Snellen equivalent) or better and reliable standard automated perimetry (fixation losses, false positives, and false negatives <33%). CS functions were estimated from the letter CS and BCVA measurements. The area under the CS function (AUCSF), which is a combined index of CS and BCVA, was derived and analyzed. The mean (± SD) BCVA was 0.08±0.10 logMAR (∼20/25 Snellen equivalent), the mean CS was 1.38±0.17, and the mean Humphrey Visual Field mean deviation (HVF MD) was -7.22±8.10 dB. Letter CS and HVF MD correlated significantly (r=0.51, P<0.001). BCVA correlated significantly with letter CS (r=-0.22, P=0.03), but not with HVF MD (r=-0.12, P=0.26). A subset of the subject sample (∼20%) had moderate to no field loss (≤-6 dB MD) and minimal to no BCVA loss (≤0.3 logMAR), but had poor letter CS. AUCSF was correlated significantly with HVF MD (r=0.46, P<0.001). The present study is the first to evaluate letter CS in glaucoma using the digital M&S Smart System II display. Letter CS correlated significantly with standard HVF MD measurements, suggesting that letter CS may provide a useful adjunct test of visual function for glaucoma patients. In addition, the significant correlation between HVF MD and the combined index of CS and BCVA (AUCSF) suggests that this measure may also be useful for quantifying visual dysfunction in glaucoma patients.
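The AUCSF is an area under the (log) contrast sensitivity function. A generic trapezoidal computation is sketched below; the study's specific two-point CSF construction from letter CS and BCVA is not reproduced, and the sample values are hypothetical:

```python
def area_under_csf(log_freqs, log_cs):
    """Trapezoidal area under a contrast sensitivity function,
    integrating log sensitivity over log spatial frequency.
    Inputs are paired, ascending samples of the CSF."""
    area = 0.0
    for i in range(1, len(log_freqs)):
        area += 0.5 * (log_cs[i] + log_cs[i - 1]) * (log_freqs[i] - log_freqs[i - 1])
    return area
```

Because the area shrinks when either peak sensitivity (CS) or the high-frequency cutoff (driven by acuity) falls, a single AUCSF number combines the two measures, which is why it can track dysfunction that neither measure flags alone.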
Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions
NASA Technical Reports Server (NTRS)
Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.
2010-01-01
We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.
Driver Distraction Using Visual-Based Sensors and Algorithms.
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-10-28
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.
Driver Distraction Using Visual-Based Sensors and Algorithms
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-01-01
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed. PMID:27801822
Stereoscopic display of 3D models for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2006-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
Low-Visibility Visual Simulation with Real Fog
NASA Technical Reports Server (NTRS)
Chase, Wendell D.
1982-01-01
An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by: (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity, low-visibility visual simulation are discussed.
Low-visibility visual simulation with real fog
NASA Technical Reports Server (NTRS)
Chase, W. D.
1981-01-01
An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity low-visibility visual simulation are discussed.
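The atmospheric-attenuation factor in the fog equations is conventionally modeled by Koschmieder's law. A minimal sketch follows, assuming the standard 5% contrast threshold for meteorological visual range (so the extinction coefficient is 3/V) and omitting the veiling-luminance term the abstract also identifies:

```python
import math


def apparent_contrast(c0, distance, visual_range):
    """Koschmieder's law: apparent contrast of a target seen through
    fog decays exponentially with distance. The extinction coefficient
    sigma is derived from the meteorological visual range V using the
    5% contrast threshold, ln(0.05) ~= -3, hence sigma = 3 / V.
    Veiling luminance is not modeled in this sketch."""
    sigma = 3.0 / visual_range  # extinction coefficient, 1/m
    return c0 * math.exp(-sigma * distance)
```

At a distance equal to the visual range, an initially unit-contrast target drops to roughly 5% apparent contrast, which is exactly the detection-threshold definition of visual range.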
Attention, Intention, and Priority in the Parietal Lobe
Bisley, James W.; Goldberg, Michael E.
2013-01-01
For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention. PMID:20192813
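The priority-map account can be illustrated as a combination of a bottom-up salience map with top-down signals whose peak location is read out for saccade targeting and attention. The additive combination rule and the discrete map representation are illustrative assumptions, not a claim about LIP's actual computation:

```python
def priority_map(bottom_up, top_down):
    """Combine a bottom-up map (e.g., rapid visual responses) with a
    top-down map (e.g., a saccade plan) by summation. The peak of the
    combined map is the location selected for a saccade or for the
    locus of visual attention."""
    combined = [b + t for b, t in zip(bottom_up, top_down)]
    peak = max(range(len(combined)), key=combined.__getitem__)
    return combined, peak
```

A salient but task-irrelevant location can still win the map if no top-down signal outweighs it, which matches the idea that activity proportional to behavioral priority, from whatever source, drives selection.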
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
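The GLO idea, a visualization technique expressed as a composition of graph-level operations, can be sketched as a pipeline of state-transforming functions. The state representation and operation names here are hypothetical, not the GLO-STIX API:

```python
def apply_glos(graph_state, operations):
    """Compose Graph-Level Operations: each GLO maps a visualization
    state to a new state, so a 'technique' is simply an ordered list
    of GLOs applied in sequence."""
    for op in operations:
        graph_state = op(graph_state)
    return graph_state
```

Under this model, a canonical technique like a node-link layout or an adjacency matrix is not a monolithic renderer but a particular GLO sequence, which is what makes mixing and discovering techniques cheap.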
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-01-01
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-09-06
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis
Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.
2014-01-01
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300
ERIC Educational Resources Information Center
Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.
2010-01-01
Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…
Qualitative GIS and the Visualization of Narrative Activity Space Data
Mennis, Jeremy; Mason, Michael J.; Cao, Yinghui
2012-01-01
Qualitative activity space data, i.e. qualitative data associated with the routine locations and activities of individuals, are recognized as increasingly useful by researchers in the social and health sciences for investigating the influence of environment on human behavior. However, there has been little research on techniques for exploring qualitative activity space data. This research illustrates the theoretical principles of combining qualitative and quantitative data and methodologies within the context of GIS, using visualization as the means of inquiry. Through the use of a prototype implementation of a visualization system for qualitative activity space data, and its application in a case study of urban youth, we show how these theoretical methodological principles are realized in applied research. The visualization system uses a variety of visual variables to simultaneously depict multiple qualitative and quantitative attributes of individuals’ activity spaces. The visualization is applied to explore the activity spaces of a sample of urban youth participating in a study on the geographic and social contexts of adolescent substance use. Examples demonstrate how the visualization may be used to explore individual activity spaces to generate hypotheses, investigate statistical outliers, and explore activity space patterns among subject subgroups. PMID:26190932
Qualitative GIS and the Visualization of Narrative Activity Space Data.
Mennis, Jeremy; Mason, Michael J; Cao, Yinghui
Qualitative activity space data, i.e. qualitative data associated with the routine locations and activities of individuals, are recognized as increasingly useful by researchers in the social and health sciences for investigating the influence of environment on human behavior. However, there has been little research on techniques for exploring qualitative activity space data. This research illustrates the theoretical principles of combining qualitative and quantitative data and methodologies within the context of GIS, using visualization as the means of inquiry. Through the use of a prototype implementation of a visualization system for qualitative activity space data, and its application in a case study of urban youth, we show how these theoretical methodological principles are realized in applied research. The visualization system uses a variety of visual variables to simultaneously depict multiple qualitative and quantitative attributes of individuals' activity spaces. The visualization is applied to explore the activity spaces of a sample of urban youth participating in a study on the geographic and social contexts of adolescent substance use. Examples demonstrate how the visualization may be used to explore individual activity spaces to generate hypotheses, investigate statistical outliers, and explore activity space patterns among subject subgroups.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Due to the incapability of using the vision-dependent BCI for patients who have visual impairment, auditory stimuli have been used to substitute the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance for the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSR and beep sounds trigger the P300 response when they occur in the target channel, thus the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with 9.11 bits/min information transfer rate (ITR) in binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) in binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system could result in a better performance and could help in the development of the future auditory BCI. Copyright © 2017 Elsevier B.V. All rights reserved.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture in real-time 3D light-field images, perform 3D reconstruction to build 3D models of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Kramer, Edgar R.
2015-01-01
Background & Aims The brain dopaminergic (DA) system is involved in fine-tuning many behaviors, and several human diseases, such as Parkinson’s disease (PD) and drug addiction, are associated with pathological alterations of the DA system. Because of its complex network integration, detailed analyses of physiological and pathophysiological conditions are only possible in a whole organism with a sophisticated tool box for visualization and functional modification. Methods & Results Here, we have generated transgenic mice expressing the tetracycline-regulated transactivator (tTA) or the reverse tetracycline-regulated transactivator (rtTA) under control of the tyrosine hydroxylase (TH) promoter, TH-tTA (tet-OFF) and TH-rtTA (tet-ON) mice, to visualize and genetically modify DA neurons. We show their tight regulation and efficient use to overexpress proteins under the control of tet-responsive elements or to delete genes of interest with tet-responsive Cre. In combination with mice encoding tet-responsive luciferase, we visualized the DA system in living mice progressively over time. Conclusion These experiments establish TH-tTA and TH-rtTA mice as a powerful tool to generate and monitor mouse models for DA system diseases. PMID:26291828
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest; only the probabilities that any point in space might be reached by a particle at some point in time are. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the possibility to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models, and move between different spatial or temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
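The binning trick described above can be sketched in a few lines; a hypothetical miniature, with `advect` standing in for one assimilation cycle's ensemble transport (not the system's actual data structures):

```python
from collections import defaultdict

def advance_cycle(particles, advect):
    """One assimilation cycle of probability-weighted tracing.

    particles: dict mapping bin -> probability mass currently in that bin.
    advect:    function mapping a bin to a list of (next_bin, weight)
               pairs, with weights summing to 1 over the ensemble.
    """
    nxt = defaultdict(float)
    for b, p in particles.items():
        for nb, w in advect(b):
            nxt[nb] += p * w  # merge particles landing in the same bin
    return dict(nxt)
```

With a 50/50 split per cycle, two cycles from a single seed bin give the binomial mass {0: 0.25, 1: 0.5, 2: 0.25}; the number of tracked entries grows with the number of occupied bins rather than doubling every cycle, which is what makes interactive rates possible.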
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
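A triple correlation of the kind described is just the mean, over space and time, of a product of three intensity samples at fixed spatiotemporal offsets; a minimal estimator (the offsets and array layout here are illustrative, not the paper's exact parameterization):

```python
import numpy as np

def triple_correlation(stim, dx1, dt1, dx2, dt2):
    """Mean of s(x, t) * s(x + dx1, t + dt1) * s(x + dx2, t + dt2)
    over all valid (x, t) in a (time, space) intensity array."""
    T, X = stim.shape
    # Clip index ranges so every shifted sample stays in bounds.
    t0, t1 = max(0, -dt1, -dt2), min(T, T - dt1, T - dt2)
    x0, x1 = max(0, -dx1, -dx2), min(X, X - dx1, X - dx2)
    a = stim[t0:t1, x0:x1]
    b = stim[t0 + dt1:t1 + dt1, x0 + dx1:x1 + dx1]
    c = stim[t0 + dt2:t1 + dt2, x0 + dx2:x1 + dx2]
    return float(np.mean(a * b * c))
```

Because the product involves an odd number of samples, such correlators are sensitive to contrast polarity, which is what lets them distinguish moving light edges from moving dark edges.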
The uncrowded window of object recognition
Pelli, Denis G; Tillman, Katharine A
2009-01-01
It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
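The Bouma law summarized above is simple enough to state as code; a sketch (the 0.5 proportionality factor is a commonly quoted approximate value, and the review notes the effective spacing varies with target-flanker similarity):

```python
def critical_spacing_deg(eccentricity_deg, bouma_factor=0.5):
    """Bouma's law: the spacing needed to escape crowding grows
    linearly with distance from fixation (eccentricity)."""
    return bouma_factor * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg, bouma_factor=0.5):
    """An object is inside the 'uncrowded window' only when its spacing
    from neighbors exceeds the critical spacing at its eccentricity."""
    return spacing_deg < critical_spacing_deg(eccentricity_deg, bouma_factor)
```

At 10 degrees eccentricity this rule predicts a critical spacing of about 5 degrees, which is why peripheral reading and search are spacing-limited rather than acuity-limited.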
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition is a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable.
Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
SocialMood: an information visualization tool to measure the mood of the people in social networks
NASA Astrophysics Data System (ADS)
Amorim, Guilherme; Franco, Roberto; Moraes, Rodolfo; Figueiredo, Bruno; Miranda, João.; Dobrões, José; Afonso, Ricardo; Meiguins, Bianchi
2013-12-01
Based on the arena of social networks, the tool developed in this study aims to identify mood trends among undergraduate students. Combining the Self-Assessment Manikin (SAM) methodology, which originated in the field of psychology, the system filters content published on the Web and isolates certain words, assigning each a value perceived as positive, negative or neutral. A Big Data layer summarizes the results, assisting in the construction and visualization of generic behavioral profiles, and thus provides a guideline for the development of information visualization tools for social networks.
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT-based extracted features of the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of both audio and video modalities for audio-visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the newly developed PDAtabase, created within the scope of the SecurePhone project.
A novel visual saliency analysis model based on dynamic multiple feature combination strategy
NASA Astrophysics Data System (ADS)
Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao
2017-06-01
The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze the saliency of objects in an image, which saves time and avoids unnecessary computation. In this paper, a novel visual saliency analysis model based on a dynamic multiple feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of each feature map to the saliency map according to the area of its salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
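A toy version of the dynamic weighting step, assuming each map's weight is proportional to its salient-area fraction times the mean intensity of that area (the abstract names these two factors but not their exact functional form, so the formula below is illustrative):

```python
def combine_feature_maps(maps, threshold=0.5):
    """maps: dict name -> 2-D list of floats in [0, 1].

    Weight each feature map by (mean intensity of above-threshold
    pixels) * (fraction of pixels above threshold), then blend the
    maps with normalized weights into a single saliency map.
    """
    first = next(iter(maps.values()))
    h, w = len(first), len(first[0])
    weights = {}
    for name, m in maps.items():
        salient = [v for row in m for v in row if v >= threshold]
        area = len(salient) / (h * w)
        weights[name] = (sum(salient) / len(salient)) * area if salient else 0.0
    total = sum(weights.values()) or 1.0
    saliency = [[sum(weights[n] / total * maps[n][i][j] for n in maps)
                 for j in range(w)] for i in range(h)]
    return saliency, weights
```

A map with no salient pixels receives zero weight, so an uninformative channel cannot dilute the combined saliency map.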
What triggers catch-up saccades during visual tracking?
de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2002-03-01
When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
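On our reading, T(XE) is the current position error divided by the current velocity error (retinal slip), i.e., the predicted time at which the eye's trajectory would cross the target; a sketch of the trigger rule using the thresholds reported above:

```python
def eye_crossing_time_ms(position_error_deg, velocity_error_deg_per_s):
    """T_XE: predicted time until the eye trajectory crosses the target,
    approximated as position error / velocity error. This ratio form is
    our reading of the paper, not a quoted equation."""
    if velocity_error_deg_per_s == 0:
        return float('inf')
    return 1000.0 * position_error_deg / velocity_error_deg_per_s

def saccade_triggered(txe_ms, low=40.0, high=180.0):
    """Smooth pursuit continues only while 40 ms <= T_XE <= 180 ms;
    outside that band a catch-up saccade is triggered."""
    return not (low <= txe_ms <= high)
```

For example, a 1 degree position error with a 10 deg/s velocity error gives T_XE = 100 ms, which sits inside the smooth-tracking band.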
Character complexity and redundancy in writing systems over human history.
Changizi, Mark A; Shimojo, Shinsuke
2005-02-07
A writing system is a visual notation system wherein a repertoire of marks, or strokes, is used to build a repertoire of characters. Are there any commonalities across writing systems concerning the rules governing how strokes combine into characters; commonalities that might help us identify selection pressures on the development of written language? In an effort to answer this question we examined how strokes combine to make characters in more than 100 writing systems over human history, ranging from about 10 to 200 characters, and including numerals, abjads, abugidas, alphabets and syllabaries from five major taxa: Ancient Near-Eastern, European, Middle Eastern, South Asian, Southeast Asian. We discovered underlying similarities in two fundamental respects. (i) The number of strokes per character is approximately three, independent of the number of characters in the writing system; numeral systems are the exception, having on average only two strokes per character. (ii) Characters are ca. 50% redundant, independent of writing system size; intuitively, this means that a character's identity can be determined even when half of its strokes are removed. Because writing systems are under selective pressure to have characters that are easy for the visual system to recognize and for the motor system to write, these fundamental commonalities may be a fingerprint of mechanisms underlying the visuo-motor system.
MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. The image processing methods assist the epiretinal prosthesis in vision restoration.
Eddy Current System for Material Inspection and Flaw Visualization
NASA Technical Reports Server (NTRS)
Bachnak, R.; King, S.; Maeger, W.; Nguyen, T.
2007-01-01
Eddy current methods have been successfully used in a variety of non-destructive evaluation applications including detection of cracks, measurements of material thickness, determining metal thinning due to corrosion, measurements of coating thickness, determining electrical conductivity, identification of materials, and detection of corrosion in heat exchanger tubes. This paper describes the development of an eddy current prototype that combines positional and eddy-current data to produce a C-scan of tested material. The preliminary system consists of an eddy current probe, a position tracking mechanism, and basic data visualization capability. Initial test results of the prototype are presented in this paper.
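Combining positional and eddy-current data into a C-scan amounts to binning each reading by its probe coordinates; a minimal sketch (the cell size, peak-hold policy, and tuple layout are our assumptions, not details taken from the paper):

```python
def build_cscan(samples, cell_mm=1.0):
    """samples: iterable of (x_mm, y_mm, amplitude) readings.

    Bin each eddy-current reading into a grid cell keyed by probe
    position to form a C-scan image; repeated visits to the same
    cell keep the peak amplitude (a common C-scan convention).
    """
    grid = {}
    for x, y, amp in samples:
        key = (int(x // cell_mm), int(y // cell_mm))
        grid[key] = max(grid.get(key, float('-inf')), amp)
    return grid
```

The resulting sparse grid maps directly to a top-down amplitude image, with high-amplitude cells flagging likely flaw locations.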
SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.
Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai
2017-01-01
The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines the state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that deals with the two major challenges including finding good solutions in a huge solution space and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems such as selecting locations of retail stores or restaurants using trajectory data.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role for efficient storage and transmission in visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by the results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of the ill-posed automatic segmentation problem, and allows a higher level of flexibility of the method. First, the precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, and a point insertion process provides the feature points for the next frame's tracking.
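The "eigenvalue-based adjustment" for feature points is plausibly a Shi-Tomasi-style corner quality, ranking candidates by the smaller eigenvalue of the local 2x2 structure tensor; a sketch under that assumption (not the paper's actual implementation):

```python
def min_eigenvalue(gxx, gxy, gyy):
    """Smaller eigenvalue of the 2x2 structure tensor
    [[gxx, gxy], [gxy, gyy]] built from local image gradients.
    Closed form via trace/determinant of a symmetric 2x2 matrix."""
    tr, det = gxx + gyy, gxx * gyy - gxy * gxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    return tr / 2 - disc

def good_feature(gxx, gxy, gyy, threshold):
    """A candidate point tracks reliably only if both eigenvalues are
    large, i.e., the smaller one clears the threshold."""
    return min_eigenvalue(gxx, gxy, gyy) >= threshold
```

Points lying on a plain edge have one near-zero eigenvalue and are rejected, which is exactly the aperture-problem case that makes tracking unreliable.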
An R-Shiny-Based Phenology Analysis System and Case Study Using a Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and huge data source for phenological analysis, but processing and mining phenological data remain a big challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (region of interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observation photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; and (2) the combination of multiple parameter extraction and growth-curve fitting methods can effectively extract the key phenology parameters, although different combinations give discrepant results for a given study area. Vegetation with a single growth peak is best fitted with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
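The double logistic model mentioned for single-peak vegetation can be written down directly; a sketch (parameter names are illustrative, and a real pipeline would fit them to the camera-derived greenness index by least squares):

```python
import math

def double_logistic(t, base, amp, sos, rsp, eos, rau):
    """Greenness as a function of day-of-year t: a baseline plus an
    amplitude modulated by a spring rise (start-of-season sos, rate rsp)
    and an autumn fall (end-of-season eos, rate rau)."""
    rise = 1.0 / (1.0 + math.exp(-rsp * (t - sos)))
    fall = 1.0 / (1.0 + math.exp(rau * (t - eos)))
    return base + amp * (rise + fall - 1.0)
```

Outside the growing season the curve sits at the baseline, at mid-season it plateaus at baseline plus amplitude, and phenology parameters such as start-of-season fall out of the fitted sos/eos inflection points.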
Boos, Amy; Qiu, Qinyin; Fluet, Gerard G; Adamovich, Sergei V
2011-01-01
This study describes the design and feasibility testing of a hand rehabilitation system that provides haptic assistance for hand opening in moderate to severe hemiplegia while subjects attempt to perform bilateral hand movements. A cable-actuated exoskeleton robot assists the subjects in performing impaired finger movements but is controlled by movement of the unimpaired hand. In an attempt to combine the neurophysiological stimuli of bilateral movement and action observation during training, visual feedback of the impaired hand is replaced by feedback of the unimpaired hand, either by using a sagittally oriented mirror or a virtual reality setup with a pair of virtual hands presented on a flat screen controlled with movement of the unimpaired hand, providing a visual image of the paretic hand moving normally. Joint angles for both hands are measured using data gloves. The system is programmed to maintain a symmetrical relationship between the two hands as they respond to commands to open and close simultaneously. Three persons with moderate to severe hemiplegia secondary to stroke trained with the system for eight 30- to 60-minute sessions without adverse events. Each demonstrated positive motor adaptations to training. The system was well tolerated by persons with moderate to severe upper extremity hemiplegia. Further testing of its effects on motor ability with a broader range of clinical presentations is indicated.
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
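The model combinations reported above amount to concatenating feature matrices before fitting a regularized linear mapping to the EEG; a toy sketch with synthetic data and train-set correlation only (the study itself uses proper cross-validated system identification):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights: (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def prediction_r(X, y, lam=1.0):
    """Pearson correlation between the ridge prediction and the target
    (computed on the training set here, purely for illustration)."""
    yhat = X @ ridge_fit(X, y, lam)
    return float(np.corrcoef(yhat, y)[0, 1])

def combined(*feature_sets):
    """Combining models (e.g., E, M, V -> EMV) is column concatenation."""
    return np.hstack(feature_sets)
```

When the target genuinely depends on two feature sets, the concatenated model recovers more variance than either set alone, mirroring the EMV-versus-constituent comparisons in the abstract.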
Mixed Initiative Visual Analytics Using Task-Driven Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Cramer, Nicholas O.; Israel, David
2015-12-07
Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
A Microparticle/Hydrogel Combination Drug-Delivery System for Sustained Release of Retinoids
Gao, Song-Qi; Maeda, Tadao; Okano, Kiichiro; Palczewski, Krzysztof
2012-01-01
Purpose. To design and develop a drug-delivery system containing a combination of poly(d,l-lactide-co-glycolide) (PLGA) microparticles and alginate hydrogel for sustained release of retinoids to treat retinal blinding diseases that result from an inadequate supply of retinol and generation of 11-cis-retinal. Methods. To study drug release in vivo, either the drug-loaded microparticle–hydrogel combination was injected subcutaneously or drug-loaded microparticles were injected intravitreally into Lrat−/− mice. Orally administered 9-cis-retinoids were used for comparison and drug concentrations in plasma were determined by HPLC. Electroretinography (ERG) and both chemical and histologic analyses were used to evaluate drug effects on visual function and morphology. Results. Lrat−/− mice demonstrated sustained drug release from the microparticle/hydrogel combination that lasted 4 weeks after subcutaneous injection. Drug concentrations in plasma of the control group treated with the same oral dose rose to higher levels for 6−7 hours but then dropped markedly by 24 hours. Significantly increased ERG responses and a markedly improved retinal pigmented epithelium (RPE)–rod outer segment (ROS) interface were observed after subcutaneous injection of the drug-loaded delivery combination. Intravitreal injection of just 2% of the systemic dose of drug-loaded microparticles provided comparable therapeutic efficacy. Conclusions. Sustained release of therapeutic levels of 9-cis-retinoids was achieved in Lrat−/− mice by subcutaneous injection in a microparticle/hydrogel drug-delivery system. Both subcutaneous and intravitreal injections of drug-loaded microparticles into Lrat−/− mice improved visual function and retinal structure. PMID:22918645
Oculo-vestibular recoupling using galvanic vestibular stimulation to mitigate simulator sickness.
Cevette, Michael J; Stepanek, Jan; Cocco, Daniela; Galea, Anna M; Pradhan, Gaurav N; Wagner, Linsey S; Oakley, Sarah R; Smith, Benn E; Zapala, David A; Brookler, Kenneth H
2012-06-01
Despite improvement in the computational capabilities of visual displays in flight simulators, intersensory visual-vestibular conflict remains the leading cause of simulator sickness (SS). By using galvanic vestibular stimulation (GVS), the vestibular system can be synchronized with a moving visual field in order to lessen the mismatch of sensory inputs thought to result in SS. A multisite electrode array was used to deliver combinations of GVS in 21 normal subjects. Optimal electrode combinations were identified and used to establish GVS dose-response predictions for the perception of roll, pitch, and yaw. Based on these data, an algorithm was then implemented in flight simulator hardware in order to synchronize visual and GVS-induced vestibular sensations (oculo-vestibular-recoupled or OVR simulation). Subjects were then randomly exposed to flight simulation either with or without OVR simulation. A self-report SS checklist was administered to all subjects after each session. An overall SS score was calculated for each category of symptoms for both groups. The analysis of GVS stimulation data yielded six unique combinations of electrode positions inducing motion perceptions in the three rotational axes. This provided the algorithm used for OVR simulation. The overall SS scores for gastrointestinal, central, and peripheral categories were 17%, 22.4%, and 20% for the Control group and 6.3%, 20%, and 8% for the OVR group, respectively. When virtual head signals produced by GVS are synchronized to the speed and direction of a moving visual field, manifestations of induced SS in a cockpit flight simulator are significantly reduced.
A systematic comparison between visual cues for boundary detection.
Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas
2016-03-01
The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
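The finding that combining cues beats any single cue can be illustrated with a toy nearest-centroid classifier on hand-made two-cue data (a stand-in for the paper's actual machine-learning classifiers and video-derived channels):

```python
import numpy as np

def centroid_classifier_acc(X, y):
    """Training-set accuracy of a nearest-class-centroid rule for a
    binary labeling y in {0, 1} over feature matrix X."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return float((pred == y).mean())
```

On data where each "cue" (column) is only partially diagnostic on its own, the two-cue classifier can separate the classes perfectly, echoing the paper's result that cue combination yields a significant accuracy improvement over any cue in isolation.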
Sensory convergence in the parieto-insular vestibular cortex
Shinder, Michael E.
2014-01-01
Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533
NASA Astrophysics Data System (ADS)
France, Lydéric; Nicollet, Christian
2010-06-01
MetaRep is a program based on our earlier program CMAS 3D. It is developed in MATLAB® script. MetaRep's objectives are to visualize and project major element compositions of mafic and pelitic rocks and their minerals in the pseudo-quaternary projections of the ACF-S, ACF-N, CMAS, AFM-K, AFM-S and AKF-S systems. These six systems are commonly used to describe metamorphic mineral assemblages and magmatic evolutions. Each system, made of four apices, can be represented as a tetrahedron that can be visualized in three dimensions with MetaRep; the four tetrahedron apices represent oxides or combinations of oxides that define the composition of the projected rock or mineral. The three-dimensional representation allows one to obtain a better understanding of the topology of the relationships between rocks and minerals. From these systems, MetaRep can also project data in ternary plots (for example, the ACF, AFM and AKF ternary projections can be generated). A functional interface makes it easy to use and does not require any knowledge of MATLAB® programming. To facilitate its use, MetaRep loads, from the main interface, data compiled in a Microsoft Excel™ spreadsheet. Although useful for scientific research, the program is also a powerful tool for teaching. We propose an application example that, by using two combined systems (ACF-S and ACF-N), provides strong support for the petrological interpretation.
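The core of such a pseudo-quaternary projection is a barycentric mapping: a composition reduced to four components becomes a weighted average of the tetrahedron's four apices. A minimal sketch follows; the regular-tetrahedron apex geometry and the four-component reduction are illustrative, not MetaRep's actual conventions:

```python
# Sketch of a pseudo-quaternary projection: a composition reduced to four
# components (tetrahedron apices, e.g. A, C, F, S) is normalized to
# barycentric weights and mapped to 3D coordinates. The apex geometry is a
# standard regular tetrahedron chosen for illustration.

APICES = [
    (1.0, 1.0, 1.0),
    (1.0, -1.0, -1.0),
    (-1.0, 1.0, -1.0),
    (-1.0, -1.0, 1.0),
]

def project(components):
    """components: four non-negative amounts. Returns the (x, y, z) projection."""
    total = sum(components)
    weights = [c / total for c in components]  # barycentric coordinates
    return tuple(
        sum(w * v[i] for w, v in zip(weights, APICES)) for i in range(3)
    )
```

A ternary plot such as ACF is the same construction with three apices lying in a plane.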
An Approach of Web-based Point Cloud Visualization without Plug-in
NASA Astrophysics Data System (ADS)
Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei
2016-11-01
With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until a few years ago, point cloud visualization was limited to desktop-based solutions; since the introduction of WebGL, several web renderers have become available. This paper addresses the current issues in web-based point cloud visualization and proposes a method of web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript to handle web interactions. Finally, the method is applied to a real case. Experiments show that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.
Pinky Extension as a Phonestheme in Mongolian Sign Language
ERIC Educational Resources Information Center
Healy, Christina
2011-01-01
Mongolian Sign Language (MSL) is a visual-gestural language that developed from multiple languages interacting as a result of both geographic proximity and political relations and of the natural development of a communication system by deaf community members. Similar to the phonological systems of other signed languages, MSL combines handshapes,…
Heydrich, Lukas; Aspell, Jane Elizabeth; Marillier, Guillaume; Lavanchy, Tom; Herbelin, Bruno; Blanke, Olaf
2018-06-18
Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether this is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems, nor where in the brain such integration might occur. To investigate this, we measured brain activity during the recently described 'cardio-visual full body illusion' which combines interoceptive and exteroceptive signals, by providing participants with visual exteroceptive information about their heartbeat in the form of a periodically illuminated silhouette outlining a video image of the participant's body and flashing in synchrony with their heartbeat. We found, as also reported previously, that synchronous cardio-visual signals increased self-identification with the virtual body. Here we further investigated whether experimental changes in self-consciousness during this illusion are accompanied by activity changes in somatosensory cortex by recording somatosensory evoked potentials (SEPs). We show that a late somatosensory evoked potential component (P45) reflects the illusory self-identification with a virtual body. These data demonstrate that interoceptive and exteroceptive signals can be combined to modulate activity in parietal somatosensory cortex.
Hayashi, Yuichiro; Ishii, Shin; Urakubo, Hidetoshi
2014-01-01
Human observers perceive illusory rotations after the disappearance of circularly repeating patches containing dark-to-light luminance. This afterimage rotation is a very powerful phenomenon, but little is known about the mechanisms underlying it. Here, we use a computational model to show that the afterimage rotation can be explained by a combination of fast light adaptation and the physiological architecture of the early visual system, consisting of ON- and OFF-type visual pathways. In this retinal ON/OFF model, the afterimage rotation appeared as a rotation of focus lines of retinal ON/OFF responses. Focus lines rotated clockwise on a light background, but counterclockwise on a dark background. These findings were consistent with the results of psychophysical experiments that we also performed. Additionally, the velocity of the afterimage rotation was comparable to that observed in our psychophysical experiments. These results suggest that the early visual system (including the retina) is responsible for the generation of the afterimage rotation, and that this illusory rotation may be systematically misinterpreted by our high-level visual system. PMID:25517906
Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin
2016-01-01
The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions; participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping: whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control the action. The planning-control model for rapid aiming movements is supported.
Visualization of tandem repeat mutagenesis in Bacillus subtilis.
Dormeyer, Miriam; Lentes, Sabine; Ballin, Patrick; Wilkens, Markus; Klumpp, Stefan; Kohlheyer, Dietrich; Stannek, Lorena; Grünberger, Alexander; Commichau, Fabian M
2018-03-01
Mutations are crucial for the emergence and evolution of proteins with novel functions, and thus for the diversity of life. Tandem repeats (TRs) are mutational hot spots that are present in the genomes of all organisms. Understanding the molecular mechanism underlying TR mutagenesis at the level of single cells requires the development of mutation reporter systems. Here, we present a mutation reporter system that is suitable to visualize mutagenesis of TRs occurring in single cells of the Gram-positive model bacterium Bacillus subtilis using microfluidic single-cell cultivation. The system allows measuring the elimination of TR units due to growth rate recovery. The cultivation of bacteria carrying the mutation reporter system in microfluidic chambers allowed us for the first time to visualize the emergence of a specific mutation at the level of single cells. The application of the mutation reporter system in combination with microfluidics might be helpful to elucidate the molecular mechanism underlying TR (in)stability in bacteria. Moreover, the mutation reporter system might be useful to assess whether mutations occur in response to nutrient starvation.
The Digital Space Shuttle, 3D Graphics, and Knowledge Management
NASA Technical Reports Server (NTRS)
Gomez, Julian E.; Keller, Paul J.
2003-01-01
The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.
High performance geospatial and climate data visualization using GeoJS
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Beezley, J. D.
2015-12-01
GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of the WebGL and Canvas 2D APIs with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited with regard to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations: • To create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics • To develop an extensible library that can combine data from multiple sources and render them using multiple backends • To build a library that works well with existing scientific visualization tools such as VTK. We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives.
Other projects built visualizations using GeoJS for interactively exploring data and analysis regarding 1) the human trafficking domain, 2) New York City taxi drop-offs and pick-ups, and 3) the Ebola outbreak. GeoJS supports advanced visualization features such as picking and selecting, as well as clustering. It also supports 2D contour plots, vector plots, heat maps, and geospatial graphs.
Measuring and Predicting Tag Importance for Image Retrieval.
Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay
2017-12-01
Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
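As a rough sketch of the ground-truth measurement step, a tag can be scored by how often it is mentioned across an image's sentence descriptions. The paper derives relative object/scene tag importance with a more careful procedure, so the simple substring matching below is purely illustrative:

```python
# Hedged sketch of measuring tag importance from sentence descriptions:
# a tag mentioned in more of an image's descriptions is treated as more
# important. Mention frequency is a simplification of the paper's method.

def tag_importance(tags, descriptions):
    """Return {tag: importance}, normalized so the importances sum to 1."""
    counts = {t: sum(t in d.lower() for d in descriptions) for t in tags}
    total = sum(counts.values()) or 1  # avoid division by zero
    return {t: c / total for t, c in counts.items()}
```

Importances measured this way would then serve as regression targets for the prediction model.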
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choo, Jaegul; Kim, Hannah; Clarkson, Edward
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
2018-01-31
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions-such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
Dynamic wake prediction and visualization with uncertainty analysis
NASA Technical Reports Server (NTRS)
Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)
2005-01-01
A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows for a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no fly zone), which comprises a plurality of wake planes, each moving independently from another. The system selectively adjusts dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.
Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!
Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel
2014-01-01
Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.
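The additive-model comparison above can be sketched as follows. The single-tilt error table and sign conventions are invented for illustration, not taken from the study; the only difference between the two models is the frame in which scene tilt is coded before looking up its single-tilt error:

```python
# Sketch of the additive models compared in the study: the pointing error
# predicted for a combined body + scene tilt is the sum of the single-tilt
# errors, with scene orientation coded either relative to the body or
# relative to gravity. Error values (deg) below are invented placeholders.

ERR_BODY = {0: 0.0, 15: -2.0}              # error from body tilt alone
ERR_SCENE = {-15: 1.5, 0: 0.0, 15: -1.5}   # error from scene tilt alone

def predict_combined(body_tilt, scene_tilt_re_body, frame="gravity"):
    """Additive prediction; scene tilt is re-coded into the chosen frame."""
    if frame == "gravity":
        scene_tilt = scene_tilt_re_body + body_tilt  # orientation re vertical
    else:  # body-centered coding
        scene_tilt = scene_tilt_re_body
    return ERR_BODY[body_tilt] + ERR_SCENE[scene_tilt]
```

With real data, the model whose predictions match the measured combined-tilt errors (here, the gravity-centered one) reveals the reference frame the brain uses.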
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist of using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots,… but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking, can be implemented by controlling from one to all the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. From the selected visual features, the behavior of the system will have particular properties regarding stability, robustness with respect to noise or to calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field inside the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
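The classic control law underlying these schemes drives the feature error e = s − s* to zero exponentially via v = −λ L⁺ e, where L is the interaction matrix relating feature velocities to camera velocities. A minimal sketch for a 2-feature, 2-DOF case where L is square and invertible (so L⁺ is the ordinary inverse; all values are illustrative):

```python
# Sketch of the image-based visual servoing law v = -lambda * L^+ (s - s*).
# For clarity L is 2x2 and invertible; in general L^+ is a pseudoinverse.

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Return the 2-DOF velocity command moving features s toward target s_star."""
    e = [s[0] - s_star[0], s[1] - s_star[1]]        # feature error
    (a, b), (c, d) = L
    det = a * d - b * c                              # assume L invertible
    inv = [[d / det, -b / det], [-c / det, a / det]] # L^-1
    return [-gain * (inv[i][0] * e[0] + inv[i][1] * e[1]) for i in range(2)]
```

Applied at each frame, this makes the error decay as e(t) ≈ e(0)·exp(−λt) when L is well estimated.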
Training Modalities to Increase Sensorimotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Brady, R.; Audas, C.; Cohen, H. S.
2009-01-01
During the acute phase of adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of our current series of studies is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The project has conducted a series of studies investigating the efficacy of treadmill training combined with a variety of sensory challenges (incongruent visual input, support surface instability) designed to increase adaptability. SA training using a treadmill combined with exposure to altered visual input was effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that, for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. SA training can be optimized by using a periodized training schedule. Test sessions that each contain short-duration exposures to multiple perturbation stimuli allow subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Using a treadmill mounted on top of a six degree-of-freedom motion base platform, we investigated locomotor training responses produced by subjects introduced to a dynamic walking surface combined with alterations in visual flow. Subjects who received this training had improved locomotor performance and faster reaction times when exposed to the novel sensory stimuli compared to control subjects. Results also demonstrate that individual sensory biases (i.e., increased visual dependency) can predict adaptive responses to novel sensory environments, suggesting that individual training prescriptions can be developed to enhance adaptability.
These data indicate that SA training can be effectively integrated with treadmill exercise and optimized to provide a unique system that combines multiple training requirements in a single countermeasure system. Learning Objectives: The development of a new countermeasure approach that enhances sensorimotor adaptability will be discussed.
Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai
2012-01-01
In this paper, we present a quantitative, highly structured cortex-simulated model, which can be described simply as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and the computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of being capable of dealing with complicated pattern recognition problems, suggesting that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue
2015-01-01
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
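The appeal of the information-filter formulation used above is that a measurement update is additive rather than requiring a Kalman gain computation. A scalar sketch of the idea (the actual system maintains a multivariate state over robot poses and BRAND landmarks, so this is illustrative only):

```python
# Scalar sketch of the information-form update behind an extended
# information filter: state uncertainty is stored as information
# (omega = 1/variance), so fusing a measurement is simple addition.

def eif_update(omega, xi, z, meas_info):
    """Fuse measurement z with information meas_info (= 1/variance)
    into the state (omega, xi), where xi = omega * mean."""
    omega_new = omega + meas_info
    xi_new = xi + meas_info * z
    return omega_new, xi_new, xi_new / omega_new  # updated mean
```

In the multivariate case the same additions apply to the information matrix and vector, which is what makes the filter attractive for online SLAM.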
Steato-Score: Non-Invasive Quantitative Assessment of Liver Fat by Ultrasound Imaging.
Di Lascio, Nicole; Avigo, Cinzia; Salvati, Antonio; Martini, Nicola; Ragucci, Monica; Monti, Serena; Prinster, Anna; Chiappino, Dante; Mancini, Marcello; D'Elia, Domenico; Ghiadoni, Lorenzo; Bonino, Ferruccio; Brunetto, Maurizia R; Faita, Francesco
2018-05-04
Non-alcoholic fatty liver disease is becoming a global epidemic. The aim of this study was to develop a system for assessing liver fat content based on ultrasound images. Magnetic resonance spectroscopy measurements were obtained in 61 patients and the controlled attenuation parameter in 54. Ultrasound images were acquired for all 115 participants and used to calculate the hepatic/renal ratio, hepatic/portal vein ratio, attenuation rate, diaphragm visualization and portal vein wall visualization. The Steato-score was obtained by combining these five parameters. Magnetic resonance spectroscopy measurements were significantly correlated with hepatic/renal ratio, hepatic/portal vein ratio, attenuation rate, diaphragm visualization and portal vein wall visualization; Steato-score was dependent on hepatic/renal ratio, attenuation rate and diaphragm visualization. Area under the receiver operating characteristic curve was equal to 0.98, with 89% sensitivity and 94% specificity. Controlled attenuation parameter values were significantly correlated with hepatic/renal ratio, attenuation rate, diaphragm visualization and Steato-score; the area under the curve was 0.79. This system could be a valid alternative as a non-invasive, simple and inexpensive assessment of intrahepatic fat.
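The abstract does not give the score's weighting or calibration, so the sketch below simply averages the five parameters after assuming each has been rescaled to [0, 1]; the real Steato-score combines them differently (it was found to depend mainly on the hepatic/renal ratio, attenuation rate, and diaphragm visualization):

```python
# Illustrative sketch of a Steato-score-style combination: five ultrasound
# parameters, each assumed pre-scaled to [0, 1], averaged with equal weights.
# Equal weighting is an assumption, not the published score.

def steato_score(hr_ratio, hpv_ratio, attenuation, diaphragm_vis, pv_wall_vis):
    params = [hr_ratio, hpv_ratio, attenuation, diaphragm_vis, pv_wall_vis]
    return sum(params) / len(params)
```

A calibrated version would fit the weights (and a decision threshold) against the magnetic resonance spectroscopy reference measurements.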
Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers
Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas
2010-01-01
Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers’ workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common assumptions about warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers. PMID:21050616
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, conducting an experiment to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
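An ITD-based localizer of the kind described rests on two generic steps: estimating the interaural time difference from the two cochlear channels, and mapping it to an azimuth. A minimal numpy sketch under a simple spherical-head assumption (the robot's actual adaptive algorithm is not reproduced here; `ear_dist` and `c` are assumed constants):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Interaural time difference (seconds) from the cross-correlation peak."""
    corr = np.correlate(left, right, mode="full")
    # Shift so that zero lag sits at index len(right) - 1
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs

def itd_to_azimuth(itd, ear_dist=0.15, c=343.0):
    """Map an ITD to azimuth (radians) under a simple spherical-head model."""
    return float(np.arcsin(np.clip(itd * c / ear_dist, -1.0, 1.0)))
```

In the robot, the ITD-to-azimuth mapping is learned adaptively from self-motion and visual feedback rather than fixed by a head model as it is here.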
Helicopter flight simulation motion platform requirements
NASA Astrophysics Data System (ADS)
Schroeder, Jeffery Allyn
Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Tan, Bingyao; Mason, Erik; MacLellan, Benjamin; Bizheva, Kostadinka K
2017-03-01
To correlate visually evoked functional and blood flow changes in the rat retina measured simultaneously with a combined optical coherence tomography and electroretinography system (OCT+ERG). Male Brown Norway (n = 6) rats were dark-adapted and anesthetized with ketamine/xylazine. Visually evoked changes in the retinal blood flow (RBF) and functional response were measured simultaneously with an OCT+ERG system with 3-μm axial resolution in retinal tissue and 47-kHz image acquisition rate. Both single flash (10 and 200 ms) and flicker (10 Hz, 20% duty cycle, 1- and 2-second duration) stimuli were projected onto the retina with a custom visual stimulator, integrated into the OCT imaging probe. Total axial RBF was calculated from circular Doppler OCT scans by integrating over the arterial and venous flow. A temporary increase in the RBF was observed with the 10- and 200-ms continuous stimuli (∼1% and ∼4% maximum RBF change, respectively) and the 10-Hz flicker stimuli (∼8% for 1-second duration and ∼10% for 2-second duration). Doubling the flicker stimulus duration resulted in a ∼25% increase in the RBF peak magnitude with no significant change in the peak latency. Single flash (200 ms) and flicker (10 Hz, 1 second) stimuli of the same illumination intensity and photon flux resulted in ∼2× larger peak RBF magnitude and ∼25% larger RBF peak latency for the flicker stimulus. Short, single flash and flicker stimuli evoked measurable RBF changes with larger RBF magnitude and peak latency observed for the flicker stimuli.
Visual fatigue modeling for stereoscopic video shot based on camera motion
NASA Astrophysics Data System (ADS)
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. Causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using multiple linear regression combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score is computed by the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
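The multiple-linear-regression step can be sketched as an ordinary least-squares fit from per-shot factor scores (spatial structure, motion scale, comfort zone, etc.) to subjective fatigue ratings. A minimal sketch with illustrative names and synthetic data, not the paper's calibrated coefficients:

```python
import numpy as np

def fit_fatigue_model(factor_scores, subjective_ratings):
    """Least-squares fit of linear weights (plus intercept) mapping per-shot
    factor scores (n_shots, n_factors) to subjective fatigue ratings."""
    X = np.column_stack([factor_scores, np.ones(len(factor_scores))])
    coef, *_ = np.linalg.lstsq(X, subjective_ratings, rcond=None)
    return coef  # per-factor weights followed by the intercept

def predict_fatigue(factor_scores, coef):
    """Total visual fatigue score for each shot under the fitted model."""
    X = np.column_stack([factor_scores, np.ones(len(factor_scores))])
    return X @ coef
```

In the paper's scheme, the factor coefficients additionally depend on the camera condition (static versus moving), so one such fit would be made per condition.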
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.
NASA Technical Reports Server (NTRS)
Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)
1998-01-01
When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.
Barrès, Victor; Lee, Jinyong
2014-01-01
How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We present first its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
Olsson, Pontus; Nysjö, Fredrik; Hirsch, Jan-Michaél; Carlbom, Ingrid B
2013-11-01
Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious facial trauma can be both complex and time-consuming, but it is generally accepted that careful pre-operative planning leads to a better outcome, with a higher degree of function and reduced morbidity in addition to reduced time in the operating room. However, today's surgery planning systems are primitive, relying mostly on the user's ability to plan complex tasks with a two-dimensional graphical interface. We present a system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six-degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptics system provides intuitive haptic feedback when bone fragments are in contact, as well as six-degrees-of-freedom attraction forces for precise bone fragment alignment. A senior surgeon without prior experience of the system received 45 min of system training. Following the training session, he completed a virtual reconstruction of a complex mandibular fracture in 22 min with an adequately reduced result. Preliminary testing with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time.
Cortesi, Fabio; Musilová, Zuzana; Stieb, Sara M; Hart, Nathan S; Siebeck, Ulrike E; Cheney, Karen L; Salzburger, Walter; Marshall, N Justin
2016-08-15
Animals often change their habitat throughout ontogeny; yet, the triggers for habitat transitions and how these correlate with developmental changes - e.g. physiological, morphological and behavioural - remain largely unknown. Here, we investigated how ontogenetic changes in body coloration and of the visual system relate to habitat transitions in a coral reef fish. Adult dusky dottybacks, Pseudochromis fuscus, are aggressive mimics that change colour to imitate various fishes in their surroundings; however, little is known about the early life stages of this fish. Using a developmental time series in combination with the examination of wild-caught specimens, we revealed that dottybacks change colour twice during development: (i) nearly translucent cryptic pelagic larvae change to a grey camouflage coloration when settling on coral reefs; and (ii) juveniles change to mimic yellow- or brown-coloured fishes when reaching a size capable of consuming juvenile fish prey. Moreover, microspectrophotometric (MSP) and quantitative real-time PCR (qRT-PCR) experiments show developmental changes of the dottyback visual system, including the use of a novel adult-specific visual gene (RH2 opsin). This gene is likely to be co-expressed with other visual pigments to form broad spectral sensitivities that cover the medium-wavelength part of the visible spectrum. Surprisingly, the visual modifications precede changes in habitat and colour, possibly because dottybacks need to first acquire the appropriate visual performance before transitioning into novel life stages. © 2016. Published by The Company of Biologists Ltd.
Wang, Yu; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori; Yin, Xuanchun
2017-09-01
Endovascular surgery benefits patients through short convalescence and minimal damage to healthy tissue. However, such advantages require the operator to have dexterous catheter-manipulation skills to avoid collateral damage, so a training system is in high demand. A training system integrating a VR simulator and a haptic device has been developed within this context. The VR simulator provides visual cues that assist the novice in safe catheterization, and the haptic device cooperates with the VR simulator to apply force sensations at the same time. The training system was tested by non-medical subjects over a five-day training session, and performance was evaluated in terms of safety criteria and task completion time. The results demonstrate that operation safety improved by 15.94% and task completion time was cut by up to 18.80 s. Moreover, subjects reported greater confidence in operation. The proposed training system constructs a comprehensive training environment that combines visualization and force sensation. Copyright © 2016 John Wiley & Sons, Ltd.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The goal of digital image fusion is to combine the important visual content from multiple source images to improve the visual quality of the result; the fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and obtain a fused image. The process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights and selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with standard fusion metrics. The results show its superiority over state-of-the-art multiresolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
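As a minimal sketch of transform-domain fusion, the following implements a one-level Haar DWT and the maximum-selection fusion rule that the paper uses as its baseline comparison; the paper's HVS sub-band weighting is not reproduced here:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse_max(img1, img2):
    """Fuse two registered images: average the LL band, take the
    larger-magnitude coefficient in each detail band (maximum selection)."""
    b1, b2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(b1[0] + b2[0]) / 2.0]
    for s1, s2 in zip(b1[1:], b2[1:]):
        fused.append(np.where(np.abs(s1) >= np.abs(s2), s1, s2))
    return haar_idwt2(*fused)
```

An HVS-weighted variant would replace the plain magnitude comparison with a perceptual weighting of each sub-band before selection.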
A prototype system based on visual interactive SDM called VGC
NASA Astrophysics Data System (ADS)
Jia, Zelu; Liu, Yaolin; Liu, Yanfang
2009-10-01
In many application domains, data are collected and referenced by geo-spatial location. Spatial data mining, the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis, and it demands efficient solutions to many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and deeper domain knowledge. In this paper, a visual interactive spatial data mining prototype system (visual geo-classify) based on VC++ 6.0 and MapObjects 2.0 is designed and developed; the basic spatial data mining algorithms are decision trees and Bayesian networks, and classification is realized through training, learning, and the integration of the two. The result indicates it is a practical and extensible visual interactive spatial data mining tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, D.B.; Grace, J.D.
1996-12-31
Petroleum system studies provide an ideal application for the combination of Geographic Information System (GIS) and multimedia technologies. GIS technology is used to build and maintain the spatial and tabular data within the study region. Spatial data may comprise the zones of active source rocks and potential reservoir facies. Similarly, tabular data include the attendant source rock parameters (e.g. pyrolysis results, organic carbon content) and field-level exploration and production histories for the basin. Once the spatial and tabular data base has been constructed, GIS technology is useful in finding favorable exploration trends, such as zones of high organic content, mature source rocks in positions adjacent to sealed, high-porosity reservoir facies. Multimedia technology provides powerful visualization tools for petroleum system studies. The components of petroleum system development, most importantly generation, migration and trap development, typically span periods of tens to hundreds of millions of years. The ability to animate spatial data over time provides an insightful alternative for studying the development of processes which are only captured in "snapshots" by static maps. New multimedia-authoring software provides this temporal dimension. The ability to record this data on CD-ROMs and allow user interactivity further leverages the combination of spatial data bases, tabular data bases and time-based animations. The example used for this study was the Bazhenov-Neocomian petroleum system of West Siberia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, J.; Jones, G.L.
1996-01-01
Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.
Advanced Lighting Controls for Reducing Energy use and Cost in DoD Installations
2013-05-01
OccuSwitch Wireless is a room-based lighting control system employing dimmable light sources, occupancy and daylight sensors, and wireless interconnection... combination of wireless and wired control solutions for a building-wide networked system that maximizes the use of daylight while improving visual... architecture of Hybrid ILDC. Architecture: The system features wireless connectivity among sensors and actuators within a zone and exploits wired
Instrumentation in molecular imaging.
Wells, R Glenn
2016-12-01
In vivo molecular imaging is a challenging task and no single type of imaging system provides an ideal solution. Nuclear medicine techniques like SPECT and PET provide excellent sensitivity but have poor spatial resolution. Optical imaging has excellent sensitivity and spatial resolution, but light photons interact strongly with tissues and so only small animals and targets near the surface can be accurately visualized. CT and MRI have exquisite spatial resolution, but greatly reduced sensitivity. To overcome the limitations of individual modalities, molecular imaging systems often combine individual cameras together, for example, merging nuclear medicine cameras with CT or MRI to allow the visualization of molecular processes with both high sensitivity and high spatial resolution.
In-situ characterization of wildland fire behavior
Bret Butler; D. Jimenez; J. Forthofer; Paul Sopko; K. Shannon; Jim Reardon
2010-01-01
A system consisting of two enclosures has been developed to characterize wildland fire behavior: The first enclosure is a sensor/data logger combination that measures and records convective/radiant energy released by the fire. The second is a digital video camera housed in a fireproof enclosure that records visual images of fire behavior. Together this system provides...
A Physiological Approach to Prolonged Recovery From Sport-Related Concussion.
Leddy, John; Baker, John G; Haider, Mohammad Nadir; Hinds, Andrea; Willer, Barry
2017-03-01
Management of the athlete with postconcussion syndrome (PCS) is challenging because of the nonspecificity of PCS symptoms. Ongoing symptoms reflect prolonged concussion pathophysiology or conditions such as migraine headaches, depression or anxiety, chronic pain, cervical injury, visual dysfunction, vestibular dysfunction, or some combination of these. In this paper, we focus on the physiological signs of concussion to help narrow the differential diagnosis of PCS in athletes. The physiological effects of exercise on concussion are especially important for athletes. Some athletes with PCS have exercise intolerance that may result from altered control of cerebral blood flow. Systematic evaluation of exercise tolerance combined with a physical examination of the neurologic, visual, cervical, and vestibular systems can in many cases identify one or more treatable postconcussion disorders.
Photoacoustic imaging of lymphatic pumping
NASA Astrophysics Data System (ADS)
Forbrich, Alex; Heinmiller, Andrew; Zemp, Roger J.
2017-10-01
The lymphatic system is responsible for fluid homeostasis and immune cell trafficking and has been implicated in several diseases, including obesity, diabetes, and cancer metastasis. Despite its importance, the lack of suitable in vivo imaging techniques has hampered our understanding of the lymphatic system. This is, in part, due to the limited contrast of lymphatic fluids and structures. Photoacoustic imaging, in combination with optically absorbing dyes or nanoparticles, has great potential for noninvasively visualizing the lymphatic vessels deep in tissues. Multispectral photoacoustic imaging is capable of separating the components; however, the slow wavelength switching speed of most laser systems is inadequate for imaging lymphatic pumping without motion artifacts being introduced into the processed images. We investigate two approaches for visualizing lymphatic processes in vivo. First, single-wavelength differential photoacoustic imaging is used to visualize lymphatic pumping in the hindlimb of a mouse in real time. Second, a fast-switching multiwavelength photoacoustic imaging system was used to assess the propulsion profile of dyes through the lymphatics in real time. These approaches may have profound impacts in noninvasively characterizing and investigating the lymphatic system.
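Single-wavelength differential imaging reduces to subtracting a pre-injection baseline from each subsequent frame; the propulsion profile is then the time course of the differential signal in a region of interest over a lymphatic vessel. A minimal sketch with illustrative names and parameters, not the authors' processing pipeline:

```python
import numpy as np

def differential_frames(frames, n_baseline=5):
    """Subtract the mean of the first `n_baseline` frames from every later
    frame, isolating signal changes (e.g. dye arrival) from static background."""
    baseline = frames[:n_baseline].mean(axis=0)
    return frames[n_baseline:] - baseline

def propulsion_trace(diff_frames, roi):
    """Mean differential signal inside an ROI over time: a pumping profile."""
    r0, r1, c0, c1 = roi
    return diff_frames[:, r0:r1, c0:c1].mean(axis=(1, 2))
```

Because only one wavelength is involved, this approach avoids the motion artifacts that slow wavelength switching introduces into multispectral unmixing.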
Divergent receiver responses to components of multimodal signals in two foot-flagging frog species.
Preininger, Doris; Boeckle, Markus; Sztatecsny, Marc; Hödl, Walter
2013-01-01
Multimodal communication of acoustic and visual signals serves a vital role in the mating system of anuran amphibians. To understand signal evolution and function in multimodal signal design it is critical to test receiver responses to unimodal signal components versus multimodal composite signals. We investigated two anuran species displaying a conspicuous foot-flagging behavior in addition to or in combination with advertisement calls while announcing their signaling sites to conspecifics. To investigate the conspicuousness of the foot-flagging signals, we measured and compared spectral reflectance of foot webbings of Micrixalus saxicola and Staurois parvus using a spectrophotometer. We performed behavioral field experiments using a model frog including an extendable leg combined with acoustic playbacks to test receiver responses to acoustic, visual and combined audio-visual stimuli. Our results indicated that the foot webbings of S. parvus achieved a 13 times higher contrast against their visual background than feet of M. saxicola. The main response to all experimental stimuli in S. parvus was foot flagging, whereas M. saxicola responded primarily with calls but never foot flagged. Together these across-species differences suggest that in S. parvus foot-flagging behavior is applied as a salient and frequently used communicative signal during agonistic behavior, whereas we propose it constitutes an evolutionary nascent state in ritualization of the current fighting behavior in M. saxicola.
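A background-contrast figure like the 13-fold difference reported here can be computed from measured reflectance spectra once they are weighted by a receiver's spectral sensitivity. The Weber contrast below is one common choice; the paper's exact contrast metric is not reproduced here, and the sensitivity curve is an assumed input:

```python
import numpy as np

def weber_contrast(target_refl, background_refl, sensitivity):
    """Weber contrast of a target against its visual background.

    All arguments are sampled over the same wavelengths; `sensitivity`
    is the receiver's (assumed) spectral sensitivity curve.
    """
    lt = np.sum(target_refl * sensitivity)      # effective target signal
    lb = np.sum(background_refl * sensitivity)  # effective background signal
    return (lt - lb) / lb
```

A ratio of per-species contrasts computed this way would yield a comparison of the kind reported for S. parvus versus M. saxicola foot webbings.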
Compact sensitive instrument for direct ultrasonic visualization of defects.
Bar-Cohen, Y; Ben-Joseph, B; Harnik, E
1978-12-01
A simple ultrasonic imaging cell based on the confocal combination of a plano-concave lens and a concave spherical mirror is described. When used in conjunction with a stroboscopic schlieren visualization system, it has the main attributes of a practical nondestructive testing instrument: it is compact, relatively inexpensive, and simple to operate; its sensitivity is fair, resolution and fidelity are good; it has a fairly large field of view and a test piece can be readily scanned. The scope of its applicability is described and discussed.
2017-08-30
as being three-fold: 1) a measurement of the integrity of both the central and peripheral visual processing centers; 2) an indicator of detail...visual assessment task 12 integral to the Army’s Class 1 Flight Physical (Ginsburg, 1981 and 1984; Bachman & Behar, 1986). During a Class 1 flight...systems. Meta-analysis has been defined as the statistical analysis of a collection of analytical results for the purpose of integrating the findings
NASA Astrophysics Data System (ADS)
Burberry, C. M.
2012-12-01
It is a well-known phenomenon that deformation style varies in space; both along the strike of a deformed belt and along the strike of individual structures within that belt. This variation in deformation style is traditionally visualized with a series of closely spaced 2D cross-sections. However, the use of 2D section lines implies plane strain along those lines, and the true 3D nature of the deformation is not necessarily captured. By using a combination of remotely sensed data, analog modeling of field datasets and this remote data, and numerical and digital visualization of the finished model, a 3D understanding and restoration of the deformation style within the region can be achieved. The workflow used for this study begins by considering the variation in deformation style which can be observed from satellite images and combining this data with traditional field data, in order to understand the deformation in the region under consideration. The conceptual model developed at this stage is then modeled using a sand and silicone modeling system, where the kinematics and dynamics of the deformation processes can be examined. A series of closely-spaced cross-sections, as well as 3D images of the deformation, are created from the analog model, and input into a digital visualization and modeling system for restoration. In this fashion, a valid 3D model is created where the internal structure of the deformed system can be visualized and mined for information. The region used in the study is the Sawtooth Range, Montana. The region forms part of the Montana Disturbed Belt in the Front Ranges of the Rocky Mountains, along strike from the Alberta Syncline in the Canadian Rocky Mountains. Interpretation of satellite data indicates that the deformation front structures include both folds and thrust structures. 
The thrust structures vary from hinterland-verging triangle zones to foreland-verging imbricate thrusts along strike, and the folds also vary in geometry along strike. The analog models, constrained by data from exploration wells, indicate that this change in geometry is related to a change in mechanical stratigraphy along the strike of the belt. Results from the kinematic and dynamic analysis of the digital model will also be presented. Additional implications of such a workflow and visualization system include the possibility of creating and viewing multiple cross-sections, including sections created at oblique angles to the original model. This allows the analysis of the non-plane strain component of the models and thus a more complete analysis, understanding and visualization of the deformed region. This workflow and visualization system is applicable to any region where traditional field methods must be coupled with remote data, intensely processed depth data, or analog modeling systems in order to generate valid geologic or geophysical models.
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on the freeform-mirror and the improved integral imaging system is demonstrated. In the traditional integral imaging (II), the distortion originating from lens aberration warps elemental images and degrades the visual effect severely. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with the large off-screen depth, which limits floating aerial visual experience. To display the 3D image in the free space, the off-axis reflection system with the freeform-mirror is designed. By combining the improved II and the designed freeform optical element, the floating aerial 3D image is presented.
NASA Astrophysics Data System (ADS)
Harris, E.
Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine E. N. Harris, Lockheed Martin Space Systems, Denver, CO and George W. Morgenthaler, U. of Colorado at Boulder History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions.
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.
Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie
2016-12-01
Hyperspectral imaging is an emerging technology recently introduced in medical applications inasmuch as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated in the operating room in order to detect anatomical tissues hardly noticed by the surgeon's naked eye. Our LCTF-based spectral imaging system is operative over the visible, near- and middle-infrared spectral ranges (400-1700 nm). It is dedicated to enhancing critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention with maximal contrast between the target tissue and its surroundings. A comparative study is carried out between band selection methods and band transformation methods. Combined band selection methods are proposed. All methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance with rich information, high tissue separability and short computational time. These methods yield a significant discrimination between biological tissues. We developed a hyperspectral imaging system in order to enhance the visualization of certain biological tissues. The proposed methods provided an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which outperforms the naked eye's capacities.
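One common way to realize band selection of this kind is to score each band with a Fisher-like separability ratio between target-tissue and background pixels, then keep the three best-scoring bands for the display RGB image. This is a hedged sketch, not the paper's combined methods; the function names and toy data are illustrative.

```python
import numpy as np

def fisher_band_scores(cube, target_mask, background_mask):
    """Score each spectral band with a Fisher-like separability ratio between
    target-tissue and background pixels (higher = better contrast)."""
    pix = cube.reshape(-1, cube.shape[-1])
    t = pix[target_mask.ravel()]
    b = pix[background_mask.ravel()]
    num = (t.mean(axis=0) - b.mean(axis=0)) ** 2
    den = t.var(axis=0) + b.var(axis=0) + 1e-12
    return num / den

def select_rgb_bands(cube, target_mask, background_mask):
    """Keep the three best-separating bands to compose a display RGB image."""
    scores = fisher_band_scores(cube, target_mask, background_mask)
    return np.argsort(scores)[-3:][::-1]

# Toy cube: 5 bands; the target tissue differs from background only in band 3
cube = np.zeros((4, 4, 5))
target = np.zeros((4, 4), dtype=bool)
target[:2] = True
cube[..., 3] = target * 10.0
bands = select_rgb_bands(cube, target, ~target)
```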
Learning Human Actions by Combining Global Dynamics and Local Appearance.
Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J
2014-12-01
In this paper, we address the problem of human action recognition through combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift-invariant distance based on subspace angles to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the Chi-Squared histogram distance in the bag-of-words framework. Finally, we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short-clip data sets, namely Weizmann, KTH, UCF sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results as compared with current state-of-the-art methods.
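The bag-of-words comparison step named above, the chi-squared histogram distance, can be sketched directly:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two bag-of-words histograms of local
    (e.g., HOG) descriptors; 0 for identical histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

d_same = chi2_distance([0.5, 0.5], [0.5, 0.5])   # identical histograms
d_diff = chi2_distance([1.0, 0.0], [0.0, 1.0])   # disjoint histograms
```

Identical histograms score 0; fully disjoint normalized histograms score 1, which is why the measure works well for comparing visual-word frequencies.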
Models of Speed Discrimination
NASA Technical Reports Server (NTRS)
1997-01-01
The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other focus has been the study of high-level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to the physiological measurements performed on neurons in the sensory systems. In the study of higher level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there seems to have been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. The understanding of these processes will have implications for our expectations for the underlying physiological mechanisms, as well as for our models of the internal representation for visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1.
Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.
Interactive visualization and analysis of multimodal datasets for surgical applications.
Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James
2012-12-01
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
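The per-location statistics that drive such linked displays (mean, spread, quartiles across ensemble members) can be computed directly. A minimal numpy sketch, assuming the members share a common grid; the dictionary of statistics is an illustrative choice, not the paper's exact display set.

```python
import numpy as np

def ensemble_summary(members):
    """Per-grid-point summary statistics across ensemble members
    (members: array of shape [n_members, ny, nx])."""
    m = np.asarray(members, dtype=float)
    return {
        "mean": m.mean(axis=0),
        "std": m.std(axis=0),            # spread as a simple uncertainty proxy
        "q25": np.percentile(m, 25, axis=0),
        "q75": np.percentile(m, 75, axis=0),
    }

# Three toy members with constant fields 1, 2, 3 on a 2x2 grid
runs = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 3.0)])
s = ensemble_summary(runs)
```

Each statistic is itself a field on the simulation grid, so it can feed a map view, while the quartiles support box-plot-style overviews.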
Multisensory Integration and Internal Models for Sensing Gravity Effects in Primates
Lacquaniti, Francesco; La Scaleia, Barbara; Maffei, Vincenzo
2014-01-01
Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects. PMID:25061610
Multisensory integration and internal models for sensing gravity effects in primates.
Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka
2014-01-01
Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
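The first-stage features can be sketched as saliency-weighted disparity statistics: locations the saliency model predicts humans attend to contribute more to the comfort prediction. The exact feature set below is an assumption for illustration; in the second stage, such vectors would be fed to a random forest regressor.

```python
import numpy as np

def comfort_features(disparity, saliency):
    """Saliency-weighted disparity statistics: attended locations contribute
    more to the predicted (dis)comfort of a stereoscopic image."""
    w = saliency / (saliency.sum() + 1e-12)          # normalize to weights
    mean = float(np.sum(w * disparity))              # weighted mean disparity
    std = float(np.sqrt(np.sum(w * (disparity - mean) ** 2)))
    salient_max = float(disparity[saliency > saliency.mean()].max())
    return np.array([mean, std, salient_max])

disparity = np.array([[1.0, 2.0], [3.0, 4.0]])
saliency = np.array([[0.0, 0.0], [0.0, 1.0]])        # attention on one corner
feats = comfort_features(disparity, saliency)
```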
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
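Independent of the specific packages described in the paper, the component-based idea can be sketched as components interoperating through a shared interface and data structure. All class names below are hypothetical, a minimal Python illustration of the design style rather than the actual software.

```python
class Component:
    """Minimal component interface: each tool consumes and produces data."""
    def process(self, data):
        raise NotImplementedError

class Loader(Component):
    """Wraps raw structural data (here, a list of atoms) into a shared dict."""
    def process(self, data):
        return {"atoms": data}

class SurfaceRenderer(Component):
    """Adds a derived representation; other components could add more."""
    def process(self, data):
        data["surface"] = f"mesh({len(data['atoms'])} atoms)"
        return data

def run_pipeline(components, data):
    """Interoperation by chaining components through a shared data dict."""
    for c in components:
        data = c.process(data)
    return data

out = run_pipeline([Loader(), SurfaceRenderer()], ["C", "N", "O"])
```

Because each component only depends on the shared interface, new analysis or visualization tools can be dropped into the pipeline without modifying the others, which is the rapid-prototyping benefit the paper emphasizes.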
Foggy perception slows us down.
Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H
2012-10-30
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog, that is, fog characterized by better visibility for distant than for close objects, we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001
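The key property of real fog, that distant objects lose more contrast than near ones, is commonly modeled with Koschmieder-style exponential attenuation. A sketch under that standard assumption (the extinction coefficient is illustrative, not taken from the paper):

```python
import math

def apparent_contrast(c0, distance, extinction=0.02):
    """Koschmieder-style attenuation: contrast seen through fog decays
    exponentially with viewing distance (extinction in 1/m)."""
    return c0 * math.exp(-extinction * distance)

near = apparent_contrast(1.0, 10.0)    # close object keeps most contrast
far = apparent_contrast(1.0, 100.0)    # distant object fades much more
```

The paper's "anti-fog" inverts exactly this gradient, giving distant objects better visibility than near ones, which is what made it possible to separate the spatial distribution of contrast from its global level.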
Wind Tunnel Visualization of the Flow Over a Full-Scale F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lanser, Wendy R.; Botha, Gavin J.; James, Kevin D.; Crowder, James P.; Schmitz, Fredric H. (Technical Monitor)
1994-01-01
The proposed paper presents flow visualization performed during experiments conducted on a full-scale F/A-18 aircraft in the 80- by 120-Foot Wind Tunnel at NASA Ames Research Center. This investigation used both surface and off-surface flow visualization techniques to examine the flow field on the forebody, canopy, leading edge extensions (LEXs), and wings. The various techniques used to visualize the flow field were fluorescent tufts, flow cones treated with reflective material, smoke in combination with a laser light sheet, and a video imaging system. The flow visualization experiments were conducted over an angle of attack range from 20 deg to 45 deg and over a sideslip range from -10 deg to 10 deg. The results show regions of attached and separated flow on the forebody, canopy, and wings. Additionally, the vortical flow is clearly visible over the leading-edge extensions, canopy, and wings.
Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John
2013-10-01
Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.
Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data
NASA Astrophysics Data System (ADS)
Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai
2017-04-01
With the lunar exploration activities carried out by the Chang'E-1, Chang'E-2 and Chang'E-3 lunar probes, a large amount of lunar data has been obtained, including topographical and image data covering the whole Moon, as well as panoramic image data of the spot close to the landing point of Chang'E-3. In this paper, we constructed an immersive virtual Moon system based on the acquired lunar exploration data by using advanced stereoscopic visualization technology, which will help scholars to carry out research on lunar topography, assist the further exploration of lunar science, and facilitate lunar science outreach to the public. We focus on building a lunar stereoscopic visualization system that combines software and hardware, using binocular stereoscopic display technology, a real-time rendering algorithm for massive terrain data, and panorama-based virtual scene building, to achieve an immersive virtual tour of the whole Moon and the local moonscape at the Chang'E-3 landing point.
Integrated network analysis and effective tools in plant systems biology
Fukushima, Atsushi; Kanaya, Shigehiko; Nishida, Kozo
2014-01-01
One of the ultimate goals in plant systems biology is to elucidate the genotype-phenotype relationship in plant cellular systems. Integrated network analysis that combines omics data with mathematical models has received particular attention. Here we focus on the latest cutting-edge computational advances that facilitate their combination. We highlight (1) network visualization tools, (2) pathway analyses, (3) genome-scale metabolic reconstruction, and (4) the integration of high-throughput experimental data and mathematical models. Multi-omics data that contain the genome, transcriptome, proteome, and metabolome and mathematical models are expected to integrate and expand our knowledge of complex plant metabolisms. PMID:25408696
Cho, Nam Hyun; Jang, Jeong Hun; Jung, Woonggyu; Kim, Jeehyun
2014-01-01
We developed an augmented-reality system that combines optical coherence tomography (OCT) with a surgical microscope. By sharing a common optical path in the microscope and OCT, we could simultaneously acquire OCT and microscope views. The system was tested to identify the middle-ear and inner-ear microstructures of a mouse. With a view toward clinical applications, including otorhinolaryngology, diseases such as middle-ear effusion were visualized in vivo in the mouse, with OCT images simultaneously acquired through the eyepiece of the surgical microscope during surgical manipulation using the proposed system. This system is expected to open up a new practical area of OCT application. PMID:24787787
Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.
Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi
2006-10-01
The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not been thoroughly investigated yet. Therefore, the aim of this work is to propose a model-based feature extraction method which employs physiological characteristics of the facial muscles producing lip movements. This approach adopts the intrinsic properties of muscles such as viscosity, elasticity, and mass, which are extracted from the dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. These parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features has been employed by adopting a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) have been used to evaluate the performance of the multimodal system once efficient audio feature extraction methods have been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a sentence that is phonetically rich. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91 to 98%.
Results on identical twins revealed that there was an apparent improvement on the performance for the dynamic muscle model-based system, in which the identification rate of the audio-visual system was enhanced from 87 to 96%.
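Multistream identification of this kind ultimately combines per-speaker scores from the audio and visual streams; a common scheme weights the stream log-likelihoods and picks the best-scoring speaker. The weight below is an illustrative assumption, and the paper's pseudo-synchronized HMM training is not reproduced here.

```python
import numpy as np

def fuse_stream_scores(audio_ll, visual_ll, audio_weight=0.6):
    """Multistream fusion: per-speaker log-likelihoods from the audio and
    visual streams are combined with exponent weights, then the
    best-scoring speaker index is selected."""
    a = np.asarray(audio_ll, dtype=float)
    v = np.asarray(visual_ll, dtype=float)
    combined = audio_weight * a + (1.0 - audio_weight) * v
    return int(np.argmax(combined)), combined

# Two enrolled speakers: audio favors speaker 1, visual favors speaker 0
best, combined = fuse_stream_scores([-10.0, -5.0], [-4.0, -9.0])
```

Lowering `audio_weight` under noisy acoustic conditions is the usual way such systems stay robust at low SNR.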
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification.
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
Emotion expressions are sometimes mixed with utterance expression in spontaneous face-to-face communication, which makes emotion recognition difficult. This article introduces methods for reducing utterance influences in the visual parameters used for audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Utterance reduction is then accomplished by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. The article introduces a Fused Hidden Markov Model Inversion method, trained on a neutrally expressed audio-visual corpus, to solve this problem. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
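The GMM-mapping simplification can be illustrated with a toy minimum-mean-square-error regression from a scalar audio-driven parameter to a scalar visual parameter; every number in this joint GMM is invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-component joint GMM over (audio parameter x, visual parameter y).
weights = np.array([0.5, 0.5])
mu = np.array([[0.0, 1.0],    # [mu_x, mu_y] of component 0
               [5.0, -1.0]])  # [mu_x, mu_y] of component 1
# Per-component joint covariances [[Sxx, Sxy], [Syx, Syy]].
cov = np.array([[[1.0, 0.5], [0.5, 1.0]],
                [[1.0, -0.3], [-0.3, 1.0]]])

def gmm_map(x):
    """Minimum-mean-square-error mapping E[y | x] under the joint GMM."""
    # Responsibility of each component given the observed x.
    sxx = cov[:, 0, 0]
    lik = weights * np.exp(-0.5 * (x - mu[:, 0]) ** 2 / sxx) / np.sqrt(2 * np.pi * sxx)
    resp = lik / lik.sum()
    # Per-component linear regression y_k(x) = mu_y + Syx / Sxx * (x - mu_x).
    y_k = mu[:, 1] + cov[:, 1, 0] / sxx * (x - mu[:, 0])
    return float(resp @ y_k)
```

The predicted utterance-related visual parameter can then be subtracted from the observed one to leave the emotion-related residual.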
Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.
2011-01-01
During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117
Neuronal connectome of a sensory-motor circuit for visual navigation
Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár
2014-01-01
Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217
Reifman, Jaques; Kumar, Kamal; Khitrov, Maxim Y; Liu, Jianbo; Ramakrishnan, Sridhar
2018-07-01
The psychomotor vigilance task (PVT) has been widely used to assess the effects of sleep deprivation on human neurobehavioral performance. To facilitate research in this field, we previously developed the PC-PVT, a freely available software system analogous to the "gold-standard" PVT-192 that, in addition to allowing for simple visual reaction time (RT) tests, also allows for near real-time PVT analysis, prediction, and visualization in a personal computer (PC). Here we present the PC-PVT 2.0 for Windows 10 operating system, which has the capability to couple PVT tests of a study protocol with the study's sleep/wake and caffeine schedules, and make real-time individualized predictions of PVT performance for such schedules. We characterized the accuracy and precision of the software in measuring RT, using 44 distinct combinations of PC hardware system configurations. We found that 15 system configurations measured RTs with an average delay of less than 10 ms, an error comparable to that of the PVT-192. To achieve such small delays, the system configuration should always use a gaming mouse as the means to respond to visual stimuli. We recommend using a discrete graphical processing unit for desktop PCs and an external monitor for laptop PCs. This update integrates a study's sleep/wake and caffeine schedules with the testing software, facilitating testing and outcome visualization, and provides near-real-time individualized PVT predictions for any sleep-loss condition considering caffeine effects. The software, with its enhanced PVT analysis, visualization, and prediction capabilities, can be freely downloaded from https://pcpvt.bhsai.org. Published by Elsevier B.V.
Effects of color combination and ambient illumination on visual perception time with TFT-LCD.
Lin, Chin-Chiuan; Huang, Kuo-Chen
2009-10-01
An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using a TFT-LCD. The effect of color combination was broken down into two subfactors: luminance contrast ratio and chromaticity contrast. Analysis indicated that luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception time. Visual perception time was better at a high luminance contrast ratio than at a low one. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seem to be the optimal choices for the design of workplaces with TFT-LCD video display terminals.
Conjunctive Coding of Complex Object Features
Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.
2016-01-01
Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583
Advanced Visualization of Experimental Data in Real Time Using LiveView3D
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2006-01-01
LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.
How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.
Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung
2018-05-10
The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.
Urban watersheds are notoriously difficult to model due to their complex, small-scale combinations of landscape and land use characteristics including impervious surfaces that ultimately affect the hydrologic system. We utilized EPA’s Visualizing Ecosystem Land Management A...
PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui
2018-01-01
To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
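The sliding-window objective described above can be sketched schematically. The residual vectors and scalar weights here are placeholders standing in for the pre-integrated IMU error terms and the point/line re-projection error terms with their information matrices; a real VIO back end would minimize this with a nonlinear least-squares solver:

```python
import numpy as np

def huber(r, delta=1.0):
    """Robust Huber cost of a residual vector, commonly used in VIO back ends."""
    n = np.linalg.norm(r)
    return 0.5 * n ** 2 if n <= delta else delta * (n - 0.5 * delta)

def window_cost(imu_residuals, point_residuals, line_residuals,
                w_imu=1.0, w_pt=1.0, w_ln=1.0):
    """Sliding-window objective: weighted sum of pre-integrated IMU error
    terms and point/line re-projection error terms (illustrative scalar
    weights standing in for inverse-covariance information matrices)."""
    cost = sum(w_imu * huber(r) for r in imu_residuals)
    cost += sum(w_pt * huber(r) for r in point_residuals)
    cost += sum(w_ln * huber(r) for r in line_residuals)
    return cost
```

Optimizing the window states to minimize this sum fuses the inertial and visual constraints jointly, which is what "tightly coupled" refers to.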
Takechi, Hiroki; Kawamura, Hinata
2017-01-01
Formation of a functional neuronal network requires not only precise target recognition, but also stabilization of axonal contacts within their appropriate synaptic layers. Little is known about the molecular mechanisms underlying the stabilization of axonal connections after reaching their specifically targeted layers. Here, we show that two receptor protein tyrosine phosphatases (RPTPs), LAR and Ptp69D, act redundantly in photoreceptor afferents to stabilize axonal connections to the specific layers of the Drosophila visual system. Surprisingly, by combining loss-of-function and genetic rescue experiments, we found that the depth of the final layer of stable termination relied primarily on the cumulative amount of LAR and Ptp69D cytoplasmic activity, while specific features of their ectodomains contribute to the choice between two synaptic layers, M3 and M6, in the medulla. These data demonstrate how the combination of overlapping downstream but diversified upstream properties of two RPTPs can shape layer-specific wiring. PMID:29116043
NASA Astrophysics Data System (ADS)
Anitha Devi, M. D.; ShivaKumar, K. B.
2017-08-01
The online payment ecosystem is a prime target for cyber fraud. Therefore, end-to-end encryption is needed to maintain the integrity of secret information related to transactions carried out online. With access to payment-related sensitive information, which enables many money transactions every day, the payment infrastructure is a major target for hackers. The proposed system presents an approach for secure online fund-transfer transactions using a unique combination of visual cryptography and a Haar-based discrete wavelet transform steganography technique. This combination of data hiding techniques reduces the amount of information shared between consumer and online merchant needed for a successful online transaction, while providing enhanced security for the customer’s account details, thereby increasing customer confidence and preventing “identity theft” and “phishing”. To evaluate the effectiveness of the proposed algorithm, root mean square error and peak signal-to-noise ratio have been used as evaluation parameters.
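The abstract does not detail the embedding scheme, so the sketch below shows only the building blocks it names: a single-level 2-D Haar DWT with exact reconstruction, and the PSNR metric used for evaluation. Perturbing a detail coefficient to carry a payload bit is one common approach, assumed here purely for illustration:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def psnr(orig, stego, peak=255.0):
    """Peak signal-to-noise ratio of a stego image against the cover."""
    mse = np.mean((orig - stego) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Because the detail subbands carry little visual energy, a small perturbation there survives reconstruction while keeping PSNR high.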
Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.
Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A
2018-01-01
Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
A generalized 3D framework for visualization of planetary data.
NASA Astrophysics Data System (ADS)
Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.
2016-12-01
As the volume and variety of data returned from planetary exploration missions continue to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a Javascript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time 'QuickLook', interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.
Multiple-region directed functional connectivity based on phase delays.
Goelman, Gadi; Dan, Rotem
2017-03-01
Network analysis is increasingly advancing the field of neuroimaging. Neural networks are generally constructed from pairwise interactions with an assumption of linear relations between them. Here, a high-order statistical framework to calculate directed functional connectivity among multiple regions, using wavelet analysis and spectral coherence, is presented. The mathematical expression for 4 regions was derived and used to characterize a quartet of regions as a linear, combined (nonlinear), or disconnected network. Phase delays between regions were used to obtain the network's temporal hierarchy and directionality. The validity of the mathematical derivation, along with the effects of coupling strength and noise on its outcomes, was studied by computer simulations of the Kuramoto model. The simulations demonstrated correct directionality for a large range of coupling strengths and low sensitivity to Gaussian noise compared with pairwise coherences. The analysis was applied to resting-state fMRI data of 40 healthy young subjects to characterize the ventral visual system, the motor system, and the default mode network (DMN). It was shown that the ventral visual system was predominantly composed of linear networks while the motor system and the DMN were composed of combined (nonlinear) networks. The ventral visual system exhibits its known temporal hierarchy, the motor system exhibits a center ↔ out hierarchy, and the DMN has dorsal ↔ ventral and anterior ↔ posterior organizations. The analysis can be applied in different disciplines, such as seismology or economics, and to a variety of brain data including stimulus-driven fMRI, electrophysiology, EEG, and MEG, thus opening new horizons in brain research. Hum Brain Mapp 38:1374-1386, 2017. © 2016 Wiley Periodicals, Inc.
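The pairwise building block of the framework, cross-spectral phase converted to a time delay, might look like the sketch below (FFT-based rather than wavelet-based, for brevity; the paper's contribution is the extension of such pairwise quantities to quartets of regions):

```python
import numpy as np

def phase_delay(x, y, fs):
    """Phase of the cross-spectrum at the dominant frequency of x,
    converted to a time delay in seconds (positive => y lags x).
    Pairwise building block only; the multiple-region framework
    extends this idea to four signals at once."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = np.argmax(np.abs(X[1:])) + 1          # dominant nonzero-frequency bin
    phase = np.angle(X[k] * np.conj(Y[k]))    # cross-spectral phase
    return phase / (2 * np.pi * freqs[k])     # delay in seconds
```

For two sinusoids offset by a known lag, the recovered delay matches the lag, which is the directionality cue the framework exploits.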
The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic
NASA Technical Reports Server (NTRS)
Ritter, George; Pedoto, Ramon
2010-01-01
This slide presentation reviews the use of scripting languages in Ground Operations Command and Control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the utility of a scripting language with the graphical and IDE richness of a full programming language. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W
We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system, providing a low-cost method for validating and quality-checking large-scale modeling efforts.
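The reduced-form proxy idea can be illustrated with a toy surrogate: sample the expensive simulation at a few design points, fit a cheap model, and answer interactive queries from the fit. The "simulation" function and the polynomial degree below are placeholders, not the project's actual models:

```python
import numpy as np

def slow_simulation(x):
    """Stand-in for an expensive energy simulation (hypothetical response)."""
    return np.sin(x) + 0.1 * x ** 2

# Sample a small ensemble of the input space and fit a cheap surrogate.
xs = np.linspace(0, 4, 20)          # ensemble of input-parameter samples
ys = slow_simulation(xs)            # one expensive run per sample
coeffs = np.polyfit(xs, ys, deg=5)  # reduced-form proxy model
surrogate = np.poly1d(coeffs)       # fast to evaluate at interactive rates
```

The visual interface can then query `surrogate` at interactive rates and schedule full runs only where the proxy looks uncertain or interesting.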
Toda, Ikuko; Ide, Takeshi; Fukumoto, Teruki; Ichihashi, Yoshiyuki; Tsubota, Kazuo
2014-03-01
To evaluate the possible advantages of combination therapy with diquafosol tetrasodium and sodium hyaluronate for dry eye after laser in situ keratomileusis (LASIK). Prospective randomized comparative trial. A total of 206 eyes of 105 patients who underwent LASIK were enrolled in this study. Patients were randomly assigned to 1 of 4 treatment groups according to the postoperative treatment: artificial tears, sodium hyaluronate, diquafosol tetrasodium, and a combination of hyaluronate and diquafosol. Questionnaire responses reflecting subjective dry eye symptoms, uncorrected and corrected visual acuity, functional visual acuity, manifest refraction, tear break-up time, fluorescein corneal staining, Schirmer test, and corneal sensitivity were examined before and 1 week and 1 month after LASIK. Distance uncorrected visual acuity was significantly better in the combination group than in the hyaluronate group 1 week and 1 month after LASIK. Near uncorrected visual acuity was significantly better in the combination group than in the artificial tear and diquafosol groups 1 week and 1 month after LASIK. Distance functional visual acuity improved significantly only in the combination group 1 month after LASIK. The Schirmer value in the combination group was significantly higher than that in the hyaluronate group at 1 month after LASIK. Subjective dry eye symptoms in the combination group improved significantly compared with those in the other groups 1 week after surgery. Our results suggest that hyaluronate and diquafosol combination therapy is beneficial for early stabilization of visual performance and improvement of subjective dry eye symptoms in patients after LASIK. Copyright © 2014 Elsevier Inc. All rights reserved.
Helicopter Flight Simulation Motion Platform Requirements
NASA Technical Reports Server (NTRS)
Schroeder, Jeffery Allyn
1999-01-01
To determine motion fidelity requirements, a series of piloted simulations was performed. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositioning. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983
Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.
Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James
2016-03-21
Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
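The principle that summation trades resolution for sensitivity can be illustrated with a toy Poisson photon-catch model: pooling over neighboring receptors and successive frames shrinks the noise in the intensity estimate. The pooling windows and photon rates are arbitrary, and the measured supralinear interaction between the two strategies is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(est, truth):
    """Signal-to-noise ratio (dB) of an estimate of a constant intensity."""
    err = np.mean((est - truth) ** 2)
    return 10 * np.log10(truth ** 2 / err)

# Dim-light photon catch: mean 2 photons per receptor per integration time.
mean_rate = 2.0
frames = rng.poisson(mean_rate, size=(50, 16, 16))  # (time, y, x) samples

raw = frames[0].astype(float)   # single-receptor, single-frame estimates
# Spatial summation: average 4x4 receptor neighborhoods.
spatial = raw.reshape(4, 4, 4, 4).mean(axis=(1, 3))
# Spatiotemporal summation: also average the first 10 frames.
spatiotemporal = frames[:10].astype(float).mean(axis=0) \
                           .reshape(4, 4, 4, 4).mean(axis=(1, 3))
```

The pooled estimates recover the scene intensity far more reliably than the raw photon counts, at the cost of spatial and temporal resolution.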
Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis
NASA Technical Reports Server (NTRS)
Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.
2012-01-01
Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.
A low-cost, portable, micro-controlled device for multi-channel LED visual stimulation.
Pinto, Marcos Antonio da Silva; de Souza, John Kennedy Schettino; Baron, Jerome; Tierra-Criollo, Carlos Julio
2011-04-15
Light emitting diodes (LEDs) are extensively used as light sources to investigate visual and visually related function and dysfunction. Here, we describe the design of a compact, low-cost, stand-alone LED-based system that enables the configuration, storage and presentation of elaborate visual stimulation paradigms. The core functionality of this system is provided by a microcontroller whose ultra-low power consumption makes it well suited for long-lasting battery applications. The effective use of hardware resources is managed by multi-layered architecture software that provides an intuitive and user-friendly interface. In the configuration mode, different stimulation sequences can be created and memorized for ten channels, independently. LED-driving current output can be set either as continuous or pulse modulated, up to 500 Hz, by duty cycle adjustments. In run mode, multiple-channel stimulus sequences are automatically applied according to the pre-programmed protocol. Steady state visual evoked potentials were successfully recorded in five subjects with no visible electromagnetic interference from the stimulator, demonstrating the efficacy of combining our prototyped equipment with electrophysiological techniques. Finally, we discuss a number of possible improvements for future development of our project. Copyright © 2011 Elsevier B.V. All rights reserved.
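The pulse-modulation settings described (duty-cycle-adjusted output up to 500 Hz) reduce to simple timer arithmetic on the microcontroller; the one-microsecond tick resolution below is a hypothetical assumption, not a figure from the paper:

```python
def pwm_timings(freq_hz, duty, timer_tick_us=1.0):
    """Timer tick counts (on, off) for one pulse-modulation period of an
    LED channel. The stimulator supports pulse modulation up to 500 Hz
    via duty-cycle adjustment; tick resolution is assumed, not specified."""
    assert 0 < freq_hz <= 500 and 0 < duty <= 1
    period_ticks = round(1e6 / freq_hz / timer_tick_us)
    on = round(period_ticks * duty)
    return on, period_ticks - on
```

A channel configured at 500 Hz with a 50% duty cycle, for example, alternates equal on and off intervals of 1 ms each.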
De Freitas, Julian; Alvarez, George A
2018-05-28
To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.
Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.
2014-01-01
An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974
Cui, Zhenyu; Gao, Yanjun; Yang, Wenzeng; Zhao, Chunli; Ma, Tao; Shi, Xiaoqiang
2018-01-01
To evaluate the therapeutic effects of a visual standard channel combined with F4.8 visual puncture super-mini percutaneous nephrolithotomy (SMP) on multiple renal calculi. The clinical data of 46 patients with multiple renal calculi treated in the Affiliated Hospital of Hebei University from October 2015 to September 2016 were retrospectively analyzed. There were 28 males and 18 females, aged 25 to 65 years (mean, 42.6 years). Stone diameters were 3.0-5.2 cm, (4.3 ± 0.8) cm on average. F4.8 visual puncture-assisted balloon expansion was used to establish a standard channel. After visible stones were removed through nephroscopy combined with ultrasound lithotripsy, stones in other parts were treated through F4.8 visual puncture SMP with a holmium laser. Indices such as the total channel-establishment time, surgical time, hemoglobin decrease, phase-I stone clearance rate and surgical complications were summarized. A single standard channel was successfully established in all cases with the assistance of F4.8 visual puncture; 24 cases were combined with a single microchannel, 16 with double microchannels, and six with three microchannels. A nephrostomy tube was placed in all patients, but not in the microchannels. F5 double-J tubes were placed after surgery. The time for establishing a standard channel through F4.8 visual puncture was (6.8 ± 1.8) min, and that for establishing a single F4.8 visual puncture microchannel was (4.5 ± 0.9) min. The surgical time was (92 ± 15) min. The phase-I stone clearance rate was 91.3% (42/46), and the hemoglobin decrease was (12.21 ± 2.5) g/L. There were 8 cases of postoperative fever, which resolved after anti-inflammatory treatment.
Four cases had 0.5-0.8 cm residual stones in the lower calyx, and all stones were discharged within one month after surgery by extracorporeal shock wave lithotripsy combined with positional therapy, without steinstrasse (stone streets), delayed bleeding, peripheral organ damage or urethral injury. Combining a visual standard channel with F4.8 visual puncture SMP for the treatment of multiple renal calculi offered the advantages of fewer large channels, a high stone clearance rate, safety and reliability, and mild complications. The established F4.8 visual puncture channel was safer and more accurate.
A Physiological Approach to Prolonged Recovery From Sport-Related Concussion
Leddy, John; Baker, John G.; Haider, Mohammad Nadir; Hinds, Andrea; Willer, Barry
2017-01-01
Management of the athlete with postconcussion syndrome (PCS) is challenging because of the nonspecificity of PCS symptoms. Ongoing symptoms reflect prolonged concussion pathophysiology or conditions such as migraine headaches, depression or anxiety, chronic pain, cervical injury, visual dysfunction, vestibular dysfunction, or some combination of these. In this paper, we focus on the physiological signs of concussion to help narrow the differential diagnosis of PCS in athletes. The physiological effects of exercise on concussion are especially important for athletes. Some athletes with PCS have exercise intolerance that may result from altered control of cerebral blood flow. Systematic evaluation of exercise tolerance combined with a physical examination of the neurologic, visual, cervical, and vestibular systems can in many cases identify one or more treatable postconcussion disorders. PMID:28387557
Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!
Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel
2014-01-01
Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. One such method, called VISOVA, combines visualization with analysis of variance, bringing the exploratory flavor associated with ANOVA to biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
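The variance decomposition that VISOVA builds on is ordinary one-way ANOVA. As a minimal sketch of the underlying computation (not the CNP tooling itself), the F statistic over a set of phenotype groups can be computed as:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of numeric groups.

    A minimal sketch of the variance decomposition behind VISOVA-style
    exploration; the actual ViVA/VISOVA software is not reproduced here.
    """
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

print(round(one_way_anova_f([[1, 2, 3], [2, 3, 4]]), 2))  # 1.5
```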
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain processes complex language with apparent ease. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
Environmental fog/rain visual display system for aircraft simulators
NASA Technical Reports Server (NTRS)
Chase, W. D. (Inventor)
1982-01-01
An environmental fog/rain visual display system for aircraft simulators is described. The electronic elements of the system include a real-time digital computer, a calligraphic color display which simulates landing lights of selectable intensity, and a color television camera for producing a moving color display of the airport runway as depicted on a model terrain board. The mechanical simulation elements of the system include an environmental chamber which can produce natural fog, nonhomogeneous fog, rain and fog combined, or rain only. A pilot looking through the aircraft windscreen sees, through the fog and/or rain generated in the environmental chamber, a viewing screen bearing the simulated color image of the airport runway, and observes a very real simulation of a runway as it would appear through actual fog and/or rain.
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
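The "factoring out" described above amounts, in its simplest additive form, to subtracting an estimated self-motion flow component from the retinal flow. A toy sketch (the purely additive model and the numbers are illustrative assumptions, not the authors' data or model):

```python
def object_motion_component(retinal_flow, self_motion_flow):
    """Recover the object-motion component by subtracting the estimated
    self-motion component from the retinal (optic flow) vector.

    A toy illustration of the additive flow decomposition described in
    the abstract; real estimates combine visual and non-visual cues.
    """
    return tuple(r - s for r, s in zip(retinal_flow, self_motion_flow))

# A dot drifts at (3, 1) deg/s on the retina while self-motion is
# expected to produce (2, 0) deg/s of flow at that retinal location:
print(object_motion_component((3.0, 1.0), (2.0, 0.0)))  # (1.0, 1.0)
```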
Advances and limitations of visual conditioning protocols in harnessed bees.
Avarguès-Weber, Aurore; Mota, Theo
2016-10-01
Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has traditionally been studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters like harnessing method, nature and duration of visual stimulation, number of trials, and inter-trial intervals, among others. As a result, the literature provides data that are hard to compare and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these conditioning parameters could modulate visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.
Correlative visualization techniques for multidimensional data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Goettsche, Craig
1989-01-01
Critical to the understanding of data is the ability to provide pictorial or visual representations of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc., to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, ranging from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
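The generic visualization pipeline described above (data access and management feeding transformation and rendering) can be sketched as a composition of stages. The stage functions below are illustrative stand-ins, not part of any actual system named in the text:

```python
def pipeline(*stages):
    """Compose data-access, transformation and rendering stages into a
    single callable — a minimal sketch of a generic visualization
    pipeline (stage contents are hypothetical)."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# toy stages: access -> transform -> render
access = lambda path: [1.0, 4.0, 9.0]                  # stand-in for data management
transform = lambda xs: [x ** 0.5 for x in xs]          # e.g. a unit conversion
render = lambda xs: " ".join(f"{x:.1f}" for x in xs)   # stand-in for a renderer

viz = pipeline(access, transform, render)
print(viz("dataset.nc"))  # 1.0 2.0 3.0
```

Because each stage only consumes the previous stage's output, representation techniques (line plot, surface, volume) can be swapped by replacing the final stage.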
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization exploits the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model enables reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for a time-consuming, detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
Reitsamer, H; Groiss, H P; Franz, M; Pflug, R
2000-01-31
We present a computer-guided microelectrode positioning system that is routinely used in our laboratory for intracellular electrophysiology and functional staining of retinal neurons. Wholemount preparations of isolated retina are kept in a superfusion chamber on the stage of an inverted microscope. Cells and layers of the retina are visualized by Nomarski interference contrast using infrared light in combination with a CCD camera system. After a five-point calibration has been performed, the electrode can be guided to any point inside the calibrated volume without moving the retina. Electrode deviations from target cells can be corrected by the software, further improving the precision of this system. The good visibility of cells obviates prelabeling with fluorescent dyes and makes it possible to work under completely dark-adapted conditions.
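The five-point calibration maps coordinates in the software's frame to stage positions. As a simplified two-dimensional analogue (the paper's actual volume calibration procedure is not specified in the abstract), an exact affine transform can be recovered from three reference-point pairs:

```python
def affine_from_points(src, dst):
    """Exact 2D affine (A, t) with A @ p + t = q from three point pairs.

    A simplified stand-in for the five-point volume calibration
    described above; 3D calibration adds a third axis and redundancy.
    """
    (x0, y0), (x1, y1), (x2, y2) = src
    (u0, v0), (u1, v1), (u2, v2) = dst
    # basis vectors spanning the source plane
    d1 = (x1 - x0, y1 - y0); d2 = (x2 - x0, y2 - y0)
    e1 = (u1 - u0, v1 - v0); e2 = (u2 - u0, v2 - v0)
    det = d1[0] * d2[1] - d2[0] * d1[1]
    # solve A @ d1 = e1 and A @ d2 = e2 by Cramer's rule
    a = (e1[0] * d2[1] - e2[0] * d1[1]) / det
    b = (e2[0] * d1[0] - e1[0] * d2[0]) / det
    c = (e1[1] * d2[1] - e2[1] * d1[1]) / det
    d = (e2[1] * d1[0] - e1[1] * d2[0]) / det
    t = (u0 - a * x0 - b * y0, v0 - c * x0 - d * y0)
    return (a, b, c, d), t

def apply_affine(A, t, p):
    a, b, c, d = A
    return (a * p[0] + b * p[1] + t[0], c * p[0] + d * p[1] + t[1])

# pure translation: image frame shifted by (2, 3) relative to the stage
A, t = affine_from_points([(0, 0), (1, 0), (0, 1)], [(2, 3), (3, 3), (2, 4)])
print(apply_affine(A, t, (5, 5)))  # (7.0, 8.0)
```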
77 FR 24833 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... the hydraulic high pressure hose and electrical wiring of the green electrical motor pump (EMP). This... panel; doing a one-time general visual inspection for correct condition and installation of hydraulic... electrical wiring of the green EMPs, which in combination with a system failure, could cause an uncontrolled...
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
Friedrich, D T; Sommer, F; Scheithauer, M O; Greve, J; Hoffmann, T K; Schuler, P J
2017-12-01
Objective: Advanced transnasal sinus and skull base surgery remains a challenging discipline for head and neck surgeons. Restricted access and space for instrumentation can impede advanced interventions. Thus, we present the combination of an innovative robotic endoscope guidance system and a specific endoscope with adjustable viewing angle to facilitate transnasal surgery in a human cadaver model. Materials and Methods: The applicability of the robotic endoscope guidance system with custom foot pedal controller was tested for advanced transnasal surgery on a fresh frozen human cadaver head. Visualization was enabled using a commercially available endoscope with adjustable viewing angle (15-90 degrees). Results: Visualization and instrumentation of all paranasal sinuses, including the anterior and middle skull base, were feasible with the presented setup. Control of the robotic endoscope guidance system was precise and effective, and the adjustable endoscope lens extended the view in the surgical field without the endoscope changes commonly required with fixed-viewing-angle endoscopes. Conclusion: The combination of a robotic endoscope guidance system and an advanced endoscope with adjustable viewing angle enables bimanual surgery in transnasal interventions of the paranasal sinuses and the anterior skull base in a human cadaver model. The adjustable lens allows fixed-angle endoscopes to be abandoned, saving time and resources without reducing the quality of imaging.
Toward a Scalable Visualization System for Network Traffic Monitoring
NASA Astrophysics Data System (ADS)
Malécot, Erwan Le; Kohara, Masayoshi; Hori, Yoshiaki; Sakurai, Kouichi
With the multiplication of attacks against computer networks, system administrators are required to monitor carefully the traffic exchanged by the networks they manage. However, that monitoring task is increasingly laborious because of the growing amount of data to analyze, a trend that will only intensify as the number of devices connected to computer networks and the available network bandwidth continue to rise. So system administrators now heavily rely on automated tools to assist them and simplify the analysis of the data. Yet, these tools provide limited support and, most of the time, require highly skilled operators. Recently, some research teams have started to study the application of visualization techniques to the analysis of network traffic data. We believe that this approach can also allow system administrators to deal with the large amount of data they have to process. In this paper, we introduce a tool for network traffic monitoring using visualization techniques that we developed in order to assist the system administrators of our corporate network. We explain how we designed the tool and some of the choices we made regarding the visualization techniques to use. The resulting tool proposes two linked representations of the network traffic and activity, one in 2D and the other in 3D. As 2D and 3D visualization techniques have different assets, we combined them in our tool to take advantage of their complementarity. We finally tested our tool in order to evaluate the accuracy of our approach.
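A first step for such a traffic view, whether 2D or 3D, is aggregating raw flow records into per-host-pair summaries that can then be colour-mapped or positioned. The record format below is a hypothetical simplification, not this tool's actual data model:

```python
from collections import Counter

def traffic_matrix(flows):
    """Aggregate (src, dst, bytes) flow records into per-pair byte
    totals — the kind of summary a traffic view can colour-map.
    The three-field record format is an illustrative assumption."""
    totals = Counter()
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return totals

flows = [("10.0.0.1", "10.0.0.2", 500),
         ("10.0.0.1", "10.0.0.2", 1500),
         ("10.0.0.3", "10.0.0.2", 200)]
print(traffic_matrix(flows)[("10.0.0.1", "10.0.0.2")])  # 2000
```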
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
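Climate networks of the kind these tools visualize are commonly built by thresholding pairwise correlations between time series at grid points. A minimal sketch of that common construction (the specific datasets and similarity measures used in the reviewed work are not reproduced here):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def climate_network(series, threshold):
    """Undirected edges between grid points whose series correlate
    above |threshold| — a minimal sketch of how geo-referenced climate
    networks are commonly constructed before tools like CGV or GTX
    visualize them."""
    nodes = list(series)
    return {(i, j) for i in nodes for j in nodes
            if i < j and abs(pearson(series[i], series[j])) >= threshold}

grid = {"a": [1, 2, 3], "b": [2, 4, 6], "c": [3, 1, 2]}
print(sorted(climate_network(grid, 0.9)))  # [('a', 'b')]
```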
Study of the cerrado vegetation in the Federal District area from orbital data. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Aoki, H.; Dossantos, J. R.
1980-01-01
The physiognomic units of cerrado in the area of Distrito Federal (DF) were studied through visual and automatic analysis of products provided by the Multispectral Scanning System (MSS) of LANDSAT. The visual analysis of the multispectral images, in black and white at the 1:250,000 scale, was based on texture and tonal patterns. The automatic analysis of the computer-compatible tapes (CCT) was made by means of the IMAGE-100 system. The following conclusions were obtained: (1) the delimitation of cerrado vegetation forms can be made by both visual and automatic analysis; (2) in the visual analysis, the principal parameter used to discriminate the cerrado forms was the tonal pattern, independently of the season, and channel 5 gave the best information; (3) in the automatic analysis, the data of the four MSS channels can be used to discriminate the cerrado forms; and (4) in the automatic analysis, combinations of the four channels gave more information for separating cerrado units when soil types were considered.
Zhang, Jian; Niu, Xin; Yang, Xue-zhi; Zhu, Qing-wen; Li, Hai-yan; Wang, Xuan; Zhang, Zhi-guo; Sha, Hong
2014-09-01
To design a pulse information acquisition and analysis system, covering the parameters of pulse position, pulse number, pulse shape and pulse force, with a dynamic recognition function, and to research the digitalization and visualization of common cardiovascular mechanisms of the single pulse. Flexible sensors were used to capture the radial artery pressure pulse wave, while high-frequency B-mode ultrasound scanning synchronously obtained information on radial extension and axial movement in the form of dynamic images; the gathered information was then analyzed and processed together with the ECG. Finally, the pulse information acquisition and analysis system, featuring visualization and dynamic recognition, was established and applied to ten healthy adults. The new system overcomes the disadvantages of the one-dimensional pulse information acquisition and processing methods commonly used in current research on pulse diagnosis in traditional Chinese Medicine, initiating a new approach to pulse diagnosis featuring dynamic recognition, two-dimensional information acquisition, multiplex signal combination and deep data mining. The newly developed system can translate pulse signals into digital, visual and measurable motion information of the vessel.
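A basic step in digitizing such pressure pulse waves is locating pulse peaks, from which rate and force measures can be derived. A hedged sketch with an assumed simple threshold rule (not the paper's actual algorithm):

```python
def detect_pulse_peaks(wave, min_height):
    """Indices of local maxima above min_height in a sampled pressure
    pulse wave — a first step toward pulse-rate and pulse-force
    measures.  The fixed-threshold rule is a simplifying assumption,
    not the method of the system described above."""
    return [i for i in range(1, len(wave) - 1)
            if wave[i] > min_height
            and wave[i] > wave[i - 1] and wave[i] >= wave[i + 1]]

# a toy two-beat waveform; peaks at samples 2 and 6
wave = [0, 2, 8, 3, 1, 2, 9, 4, 0]
print(detect_pulse_peaks(wave, 5))  # [2, 6]
```

Given the sampling rate, the spacing between successive peak indices yields the pulse rate, and the peak amplitudes give a crude pulse-force estimate.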
Three-Dimensional Visualization of Ozone Process Data.
1997-06-18
This report describes three-dimensional visualization of ozone process data from the Multiscale Air Quality Simulation Platform (MAQSIP), a modular, comprehensive air quality modeling system developed at MCNC. In the photochemistry visualized, nitrogen dioxide is photolyzed back to nitric oxide, and oxides of nitrogen are ultimately terminated through loss or combination into nitric acid and organic nitrates.
ERIC Educational Resources Information Center
Ali, Emad; MacFarland, Stephanie Z.; Umbreit, John
2011-01-01
The Picture Exchange Communication System (PECS) is an augmentative and alternative communication (AAC) program used to teach functional requesting and commenting skills to people with disabilities (Bondy & Frost, 1993; Frost & Bondy, 2002). In this study, tangible symbols were added to PECS in teaching requesting to four students (ages 7-14) with…
Combined optical resolution photoacoustic and fluorescence micro-endoscopy
NASA Astrophysics Data System (ADS)
Shao, Peng; Shi, Wei; Hajireza, Parsin; Zemp, Roger J.
2012-02-01
We present a new micro-endoscopy system combining real-time C-scan optical-resolution photoacoustic micro-endoscopy (OR-PAME) and high-resolution fluorescence micro-endoscopy for simultaneously visualizing fluorescently labeled cellular components and optically absorbing microvasculature. With a diode-pumped 532-nm fiber laser, the OR-PAME sub-system is capable of imaging with a resolution of ~7 μm. The fluorescence sub-system consists of a diode laser with emission centered at 445 nm as the light source, an objective lens and a CCD camera. Proflavine, an FDA-approved drug for human use, is used as the fluorescent contrast agent by topical application. The fluorescence system does not require any mechanical scanning. The scanning laser and the diode laser light source share the same light path within an optical fiber bundle containing 30,000 individual single-mode fibers. The absorption of proflavine at 532 nm is low, which mitigates absorption bleaching of the contrast agent by the photoacoustic excitation source. We demonstrate imaging in live murine models. The system is able to provide cellular morphology at cellular resolution, co-registered with the structural and functional information given by OR-PAME. Therefore, the system has the potential to serve as a virtual biopsy technique, helping researchers and clinicians visualize angiogenesis and the effects of anti-cancer drugs on both cells and the microcirculation, as well as aid in the study of other diseases.
Rosser, James C; Fleming, Jeffrey P; Legare, Timothy B; Choi, Katherine M; Nakagiri, Jamie; Griffith, Elliot
2017-12-22
To design and develop a distance learning (DL) system for the transference of laparoscopic surgery knowledge and skill constructed from off-the-shelf materials and commercially available software. Minimally invasive surgery offers significant benefits over traditional surgical procedures, but adoption rates for many procedures are low. Skill and confidence deficits are two of the culprits. DL combined with simulation training and telementoring may address these issues at scale. The system must be built to meet the instruction requirements of a proven laparoscopic skills course (Top Gun). Thus, the rapid sharing of multimedia educational materials, secure two-way audio/visual communications, and annotation and recording capabilities are requirements for success. These requirements are more in line with telementoring missions than standard distance learning efforts. A DL system with telementor, classroom, and laboratory stations was created. The telementor station consists of a desktop computer and headset with microphone. For the classroom station, a laptop is connected to a digital projector that displays the remote instructor and content. A tripod-mounted webcam provides classroom visualization and a Bluetooth® wireless speaker establishes audio. For the laboratory station, a laptop with universal serial bus (USB) expander is combined with a tabletop laparoscopic skills trainer, a headset with microphone, two webcams, and a Bluetooth® speaker. The cameras are mounted on a standard tripod and an adjustable gooseneck camera mount clamp to provide an internal and external view of the training area. Internet meeting software provides audio/visual communications including transmission of educational materials. A DL system was created using off-the-shelf materials and commercially available software. It will allow investigations to evaluate the effectiveness of laparoscopic surgery knowledge and skill transfer utilizing DL techniques.
Toward a hybrid brain-computer interface based on imagined movement and visual attention
NASA Astrophysics Data System (ADS)
Allison, B. Z.; Brunner, C.; Kaiser, V.; Müller-Putz, G. R.; Neuper, C.; Pfurtscheller, G.
2010-04-01
Brain-computer interface (BCI) systems do not work for all users. This article introduces a novel combination of tasks that could inspire BCI systems that are more accurate than conventional BCIs, especially for users who cannot attain accuracy adequate for effective communication. Subjects performed tasks typically used in two BCI approaches, namely event-related desynchronization (ERD) and steady state visual evoked potential (SSVEP), both individually and in a 'hybrid' condition that combines both tasks. Electroencephalographic (EEG) data were recorded across three conditions. Subjects imagined moving the left or right hand (ERD), focused on one of the two oscillating visual stimuli (SSVEP), and then simultaneously performed both tasks. Accuracy and subjective measures were assessed. Offline analyses suggested that half of the subjects did not produce brain patterns that could be accurately discriminated in response to at least one of the two tasks. If these subjects produced comparable EEG patterns when trying to use a BCI, these subjects would not be able to communicate effectively because the BCI would make too many errors. Results also showed that switching to a different task used in BCIs could improve accuracy in some of these users. Switching to a hybrid approach eliminated this problem completely, and subjects generally did not consider the hybrid condition more difficult. Results validate this hybrid approach and suggest that subjects who cannot use a BCI should consider switching to a different BCI approach, especially a hybrid BCI. Subjects proficient with both approaches might combine them to increase information throughput by improving accuracy, reducing selection time, and/or increasing the number of possible commands.
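The gain from the hybrid condition can be illustrated with a toy sketch. Assuming (this is an illustration, not the authors' classifier) that each task yields per-command posterior probabilities and that the ERD and SSVEP features are conditionally independent, a naive-Bayes fusion multiplies the posteriors, so a subject who is weak on one task can be rescued by the other:

```python
# Toy sketch of naive-Bayes fusion of two BCI classifiers (an assumed fusion
# rule for illustration; not the classifier used in the study).

def fuse(p_erd, p_ssvep):
    """Combine two posterior dicts over commands, assuming conditional independence."""
    combined = {cls: p_erd[cls] * p_ssvep[cls] for cls in p_erd}
    total = sum(combined.values())
    return {cls: p / total for cls, p in combined.items()}
```

In this invented example, a near-chance ERD posterior (0.55 for 'left') combined with a stronger SSVEP posterior (0.8) yields a fused confidence higher than either classifier gives alone.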
View Combination: A Generalization Mechanism for Visual Recognition
ERIC Educational Resources Information Center
Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric
2011-01-01
We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…
Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin
2016-01-01
Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Therefore, automated reading using camera systems might represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters, indicated by the automated system, were visually controlled and adjusted, if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) in 81.4% of tests without visual adjustment of the automatically determined zone diameters. In 20% (309 of 1548) of tests it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; very major errors (false susceptible interpretation), major errors (false resistant interpretation), and minor errors (false categorization involving an intermediate result), calculated according to the ISO 20776-2 guideline, accounted for 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results), and 1.4% (21 of 1548 total results), respectively, compared to manual reading. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
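The ISO 20776-2 error categories reported above can be computed directly from paired readings. The sketch below is a hypothetical helper, not part of the ADAGIO software; the category labels 'S', 'I', 'R' and the denominators follow the abstract (very major errors over resistant results, major errors over susceptible results, minor errors over all tests):

```python
# Sketch: categorical agreement and ISO 20776-2 error rates from paired
# manual vs. automated S/I/R categorizations (hypothetical helper).

def error_rates(manual, automated):
    """manual, automated: lists of 'S', 'I', or 'R' per isolate-antibiotic test."""
    n = len(manual)
    ca = sum(m == a for m, a in zip(manual, automated)) / n
    n_resistant = sum(m == 'R' for m in manual)
    n_susceptible = sum(m == 'S' for m in manual)
    # Very major error: automated reads susceptible where manual read resistant.
    vme = sum(m == 'R' and a == 'S' for m, a in zip(manual, automated)) / n_resistant
    # Major error: automated reads resistant where manual read susceptible.
    me = sum(m == 'S' and a == 'R' for m, a in zip(manual, automated)) / n_susceptible
    # Minor error: any disagreement involving an intermediate category, over all tests.
    mie = sum(m != a and 'I' in (m, a) for m, a in zip(manual, automated)) / n
    return ca, vme, me, mie
```

Applied to the reference (manual) and test (automated) category lists for all isolate-antibiotic combinations, this reproduces the denominators used in the study's percentages.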
A case for spiking neural network simulation based on configurable multiple-FPGA systems.
Yang, Shufan; Wu, Qiang; Li, Renfa
2011-09-01
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the hardware's inherent parallelism. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can assemble sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
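As a minimal software-side illustration of the kind of model such a platform accelerates, the sketch below implements a textbook leaky integrate-and-fire neuron (an assumed standard model, not the authors' FPGA design), whose output is exactly the precisely timed spikes the abstract discusses:

```python
# Toy leaky integrate-and-fire (LIF) neuron, Euler-integrated; illustrates
# spike-timing output, not the paper's hardware implementation.

def lif_spike_times(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (in ms) for a LIF neuron driven by a current sequence."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)      # leaky integration of membrane potential
        if v >= v_thresh:                # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset                  # reset after the spike
    return spikes
```

With a constant suprathreshold input the neuron fires at regular intervals; time-varying inputs produce the precisely timed, reproducible spike trains that hardware parallelism lets one compute for many neurons at once.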
Cavanagh, Patrick
2011-01-01
Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719
Postural and Spatial Orientation Driven by Virtual Reality
Keshner, Emily A.; Kenyon, Robert V.
2009-01-01
Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher-level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide-field-of-view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796
ViA: a perceptual visualization assistant
NASA Astrophysics Data System (ADS)
Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.
2000-05-01
This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal mappings from data attributes to visual features. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
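ViA's mixed-initiative search is not specified in detail here; as a stand-in, the sketch below shows the simplest possible strategy it improves upon: a greedy assignment of data attributes to visual features driven by evaluation weights like those the evaluation engines produce. All attribute names, feature names, and weights are invented for illustration.

```python
# Hypothetical sketch: greedy data-attribute-to-visual-feature assignment from
# evaluation weights (a stand-in for ViA's mixed-initiative planning search).

def greedy_mapping(weights):
    """weights: dict mapping (attribute, feature) -> evaluation score.
    Pairs each data attribute with the best still-unused visual feature."""
    pairs = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    used_attrs, used_feats, mapping = set(), set(), {}
    for (attr, feat), score in pairs:
        if attr not in used_attrs and feat not in used_feats:
            mapping[attr] = feat
            used_attrs.add(attr)
            used_feats.add(feat)
    return mapping
```

A mixed-initiative planner would go further, backtracking on the greedy choices and folding in the engines' improvement hints and the user's stated preferences.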
Measurement of suprathreshold binocular interactions in amblyopia.
Mansouri, B; Thompson, B; Hess, R F
2008-12-01
It has been established that in amblyopia, information from the amblyopic eye (AME) is not combined with that from the fellow fixing eye (FFE) under conditions of binocular viewing. However, recent evidence suggests that mechanisms that combine information between the eyes are intact in amblyopia. The lack of binocular function is most likely due to the imbalanced inputs from the two eyes under binocular conditions [Baker, D. H., Meese, T. S., Mansouri, B., & Hess, R. F. (2007b). Binocular summation of contrast remains intact in strabismic amblyopia. Investigative Ophthalmology & Visual Science, 48(11), 5332-5338]. We have measured the extent to which the information presented to each eye needs to differ for binocular combination to occur and in doing so we quantify the influence of interocular suppression. We quantify these suppressive effects for suprathreshold processing of global stimuli for both motion and spatial tasks. The results confirm the general importance of these suppressive effects in rendering the structurally binocular visual system of a strabismic amblyope, functionally monocular.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an International Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, an observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
Ethanol-blended gasoline entered the market in response to demand for domestic renewable energy sources, and may result in increased inhalation of ethanol vapors in combination with other volatile gasoline constituents. It is important to understand potential risks of inhalation ...
Health Indicators: A Tool for Program Review
ERIC Educational Resources Information Center
Abou-Sayf, Frank K.
2006-01-01
A visual tool used to evaluate instructional program performance has been designed by the University of Hawaii Community College system. The tool combines features from traffic lights, blood-chemistry test reports, and industry production control charts, and is labeled the Program Health-Indicator Chart. The tool was designed to minimize the labor…
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
The wavelet transform, a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we obtain octave-spaced frequency bands with specific orientations, which match well with the properties of the human visual system. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
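A single level of the 2D Haar transform (used here as the simplest wavelet; the abstract does not name the filter bank actually used) makes the orientation-specific subbands concrete: LL is the low-pass approximation, while the three detail bands capture variation across rows, across columns, and diagonally, which a classified vector quantizer could then code separately per band.

```python
# Minimal one-level 2D Haar transform in pure Python (illustrative; the paper's
# own wavelet and coder are not specified here). Input must have even dimensions.

def haar2d(img):
    """Return the LL, LH, HL, HH subbands of a 2D list of numbers."""
    h, w = len(img), len(img[0])
    ll, lh, hl, hh = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll[i // 2][j // 2] = (a + b + c + d) / 2.0   # low-pass approximation
            lh[i // 2][j // 2] = (a + b - c - d) / 2.0   # variation across rows
            hl[i // 2][j // 2] = (a - b + c - d) / 2.0   # variation across columns
            hh[i // 2][j // 2] = (a - b - c + d) / 2.0   # diagonal variation
    return ll, lh, hl, hh
```

Because the four 2×2 basis vectors are orthonormal, the transform preserves signal energy exactly, so bit allocation among subbands can be driven by their energies.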
ERIC Educational Resources Information Center
Scarlatos, Tony
2013-01-01
Exploring the Solar System in the elementary school curriculum has traditionally involved activities, such as building scale models, to help students visualize the vastness of space and the relative size of the planets and their orbits. Today, numerous websites provide a wealth of information about the sun and the planets, combining text, photos,…
ERIC Educational Resources Information Center
Chen, Ching-chih
1991-01-01
Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…
Constructive, Collaborative, Contextual, and Self-Directed Learning in Surface Anatomy Education
ERIC Educational Resources Information Center
Bergman, Esther M.; Sieben, Judith M.; Smailbegovic, Ida; de Bruin, Anique B. H.; Scherpbier, Albert J. J. A.; van der Vleuten, Cees P. M.
2013-01-01
Anatomy education often consists of a combination of lectures and laboratory sessions, the latter frequently including surface anatomy. Studying surface anatomy enables students to elaborate on their knowledge of the cadaver's static anatomy by enabling the visualization of structures, especially those of the musculoskeletal system, move and…
49 CFR 180.209 - Requirements for requalification of specification cylinders.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Requalification period (years) Eddy current examination combined with visual inspection Eddy current—In accordance... performing eddy current must be familiar with the eddy current equipment and must standardize (calibrate) the system in accordance with the requirements provided in Appendix C to this part. 2 The eddy current must...
49 CFR 180.209 - Requirements for requalification of specification cylinders.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Requalification period (years) Eddy current examination combined with visual inspection Eddy current—In accordance... performing eddy current must be familiar with the eddy current equipment and must standardize (calibrate) the system in accordance with the requirements provided in Appendix C to this part. 2 The eddy current must...
49 CFR 180.209 - Requirements for requalification of specification cylinders.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Requalification period (years) Eddy current examination combined with visual inspection Eddy current—In accordance... performing eddy current must be familiar with the eddy current equipment and must standardize (calibrate) the system in accordance with the requirements provided in Appendix C to this part. 2 The eddy current must...
Dual processing of visual rotation for bipedal stance control.
Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene
2016-10-01
When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75–48 deg s⁻¹) and direction of visual rotation was pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships.
Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
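The Bayesian prediction tested here, a gain that falls as visual motion speeds up, can be illustrated numerically. The sketch below is a toy model, not the model proposed in the earlier work the abstract cites: self-motion speeds are assumed to cluster near zero (narrow Gaussian prior) while environment motion has a broad prior, so the posterior responsibility the observer assigns to self-motion decreases with stimulus speed.

```python
import math

# Toy illustration (assumed priors, not the cited model): posterior probability
# that an observed visual rotation reflects self-motion rather than scene motion.

def self_motion_responsibility(speed, sigma_self=2.0, sigma_env=20.0, p_self=0.5):
    """Two-hypothesis Bayes: zero-mean Gaussian priors over rotation speed."""
    def gauss(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    ls = p_self * gauss(speed, sigma_self)        # self-motion hypothesis
    le = (1 - p_self) * gauss(speed, sigma_env)   # environment-motion hypothesis
    return ls / (ls + le)
```

If the postural response scales with this responsibility, slow rotations (mostly attributed to self-motion) evoke large corrections while fast rotations (attributed to the scene) evoke small ones, matching the predicted decreasing gain.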
2018-01-01
Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber-physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images.
The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least two experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. In experiment 1, we confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths with the central attention task, so that the spectral sensitivity function changed shape under visual attention. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In experiment 2, we found that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual-task condition. In experiment 3, we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Visual Odometry for Autonomous Deep-Space Navigation
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances this technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep space rendezvous, asteroid exploration, loss-of-comm.
DEMS - a second generation diabetes electronic management system.
Gorman, C A; Zimmerman, B R; Smith, S A; Dinneen, S F; Knudsen, J B; Holm, D; Jorgensen, B; Bjornsen, S; Planet, K; Hanson, P; Rizza, R A
2000-06-01
Diabetes electronic management system (DEMS) is a component-based client/server application, written in Visual C++ and Visual Basic, with the database server running Sybase System 11. DEMS is built entirely with a combination of dynamic link libraries (DLLs) and ActiveX components - the only exception is the DEMS.exe. DEMS is a chronic disease management system for patients with diabetes. It is used at the point of care by all members of the diabetes team including physicians, nurses, dieticians, clinical assistants and educators. The system is designed for maximum clinical efficiency and facilitates appropriately supervised delegation of care. Dispersed clinical sites may be supervised from a central location. The system is designed for ease of navigation; immediate provision of many types of automatically generated reports; quality audits; aids to compliance with good care guidelines; and alerts, advisories, prompts, and warnings that guide the care provider. The system now contains data on over 34,000 patients and is in daily use at multiple sites.
Visualization-based decision support for value-driven system design
NASA Astrophysics Data System (ADS)
Tibor, Elliott
In the past 50 years, the military, communication, and transportation systems that permeate our world have grown exponentially in size and complexity. The development and production of these systems have seen ballooning costs and increased risk. This is particularly critical for the aerospace industry. The inability to deal with growing system complexity is a crippling force in the advancement of engineered systems. Value-Driven Design represents a paradigm shift in the field of design engineering that has the potential to help counteract this trend. The philosophy of Value-Driven Design places the desires of the stakeholder at the forefront of the design process to capture true preferences and reveal system alternatives that were never previously thought possible. Modern aerospace engineering design problems are large, complex, and involve multiple levels of decision-making. To find the best design, the decision-maker is often required to analyze hundreds or thousands of combinations of design variables and attributes. Visualization can be used to support these decisions, by communicating large amounts of data in a meaningful way. Understanding the design space, the subsystem relationships, and the design uncertainties is vital to the advancement of Value-Driven Design as an accepted process for the development of more effective, efficient, robust, and elegant aerospace systems. This research investigates the use of multi-dimensional data visualization tools to support decision-making under uncertainty during the Value-Driven Design process. A satellite design system comprising a satellite, ground station, and launch vehicle is used to demonstrate the effectiveness of new visualization methods to aid in decision support during complex aerospace system design.
These methods are used to facilitate the exploration of the feasible design space by representing the value impact of system attribute changes and comparing the results of multi-objective optimization formulations with a Value-Driven Design formulation. The visualization methods are also used to assist in the decomposition of a value function, by representing attribute sensitivities to aid with trade-off studies. Lastly, visualization is used to enable greater understanding of the subsystem relationships, by displaying derivative-based couplings, and the design uncertainties, through implementation of utility theory. The use of these visualization methods is shown to enhance the decision-making capabilities of the designer by granting them a more holistic view of the complex design space.
2013-01-01
Background: Despite ongoing interest in the neurophysiology of visual systems in scorpions, aspects of their neuroanatomy have received little attention. Lately, sets of neuroanatomical characters have contributed important arguments to the discussion of arthropod ground patterns and phylogeny. In various attempts to reconstruct phylogeny (from morphological, morphological + molecular, or molecular data), scorpions were placed either as the basalmost Arachnida, or within Arachnida with changing sister-group relationships, or grouped with the extinct Eurypterida and Xiphosura inside the Merostomata. Thus, the position of scorpions is a key to understanding chelicerate evolution. To shed more light on this, the present study for the first time combines various techniques (cobalt fills, DiI/DiO labelling, osmium-ethyl gallate procedure, and AMIRA 3D reconstruction) to explore the central projections and visual neuropils of the median and lateral eyes in Euscorpius italicus (Herbst, 1800) and E. hadzii Di Caporiacco, 1950. Results: Scorpion median eye retinula cells are linked to a first and a second visual neuropil, while some fibres additionally connect the median eyes with the arcuate body. The lateral eye retinula cells are likewise linked to a first and a second visual neuropil, with the second neuropil being partly shared by projections from both eyes. Conclusions: Comparing these results to previous studies on the visual systems of scorpions and other chelicerates, we found striking similarities to the innervation pattern in Limulus polyphemus for both median and lateral eyes. From a visual-system point of view, this supports at least a phylogenetically basal position of Scorpiones in Arachnida, or even a close relationship to Xiphosura. In addition, we propose a ground pattern for the central projections of chelicerate median eyes. PMID:23842208
On the rules of integration of crowded orientation signals
Põder, Endel
2012-01-01
Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target–flanker orientation difference (a drop at intermediate differences). For small target–flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity–heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment. PMID:23145295
Sex identification in female crayfish is bimodal
NASA Astrophysics Data System (ADS)
Aquiloni, Laura; Massolo, Alessandro; Gherardi, Francesca
2009-01-01
Sex identification has been studied in several species of crustacean decapods, but only seldom has the role of multimodality been investigated in a systematic fashion. Here, we analyse the effect of single/combined chemical and visual stimuli on the ability of the crayfish Procambarus clarkii to identify the sex of a conspecific during mating interactions. Our results show that crayfish respond to the offered stimuli depending on their sex. While males rely on olfaction alone for sex identification, females require the combination of olfaction and vision to do so. In females, chemical and visual stimuli act as non-redundant signal components that possibly enhance the ability to discriminate potential mates in the crowded social context experienced during the mating period. This is one of the few clear examples in invertebrates of non-redundancy in a bimodal communication system.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
GenomeGraphs: integrated genomic data visualization with R.
Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine
2009-01-06
Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, as an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types together in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package which can be used to visualize a multitude of genomic datasets within the statistical programming environment R.
Adaptive optics without altering visual perception.
Koenig, D E; Hart, N W; Hofer, H J
2014-04-01
Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer-wavelength (980 nm) beacon that, in conjunction with appropriate restrictions on timing and placement, allowed us to perform psychophysics when dark-adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small-spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on the invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.
Foggy perception slows us down
Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H
2012-01-01
Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysical measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001 PMID:23110253
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two imagery modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual- or motor-interference task combined with an MI- or VI-reaching task. We expected that increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities of the imaged task (both MI and VI) and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
Immersive Visual Analytics for Transformative Neutron Scattering Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Daniel, Jamison R; Drouhard, Margaret
The ORNL Spallation Neutron Source (SNS) provides the most intense pulsed neutron beams in the world for scientific research and development across a broad range of disciplines. SNS experiments produce large volumes of complex data that are analyzed by scientists with varying degrees of experience using 3D visualization and analysis systems. However, it is notoriously difficult to achieve proficiency with 3D visualizations. Because 3D representations are key to understanding the neutron scattering data, scientists are unable to analyze their data in a timely fashion, resulting in inefficient use of the limited and expensive SNS beam time. We believe a more intuitive interface for exploring neutron scattering data can be created by combining immersive virtual reality technology with high performance data analytics and human interaction. In this paper, we present our initial investigations of immersive visualization concepts as well as our vision for an immersive visual analytics framework that could lower the barriers to 3D exploratory data analysis of neutron scattering data at the SNS.
Dual function seal: visualized digital signature for electronic medical record systems.
Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang
2012-10-01
Digital signatures are an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and their use is required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce trust among staff who are used to handwritten signatures or seals. In this paper we therefore propose a dual function seal to extend user trust from a traditional seal to a digital signature. The proposed dual function seal is a prototype that combines the traditional seal and the digital signature. With this prototype, medical personnel can not only put a seal on paper but also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and see directly which medical personnel generated it, just as with a traditional seal. The discrete wavelet transform (DWT) is used as the image processing method to generate the visualized digital signature, and the peak signal-to-noise ratio (PSNR) is calculated to verify that the distortions of all converted images are beyond human recognition; the PSNR values of our converted images range from 70 dB to 80 dB. Signature recoverability is also tested to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
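The 70 dB to 80 dB figures follow from the standard PSNR formula for 8-bit images. A minimal sketch (the toy arrays below are illustrative, not the authors' actual seal images):

```python
import numpy as np

def psnr(original, converted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(float) - converted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion at all
    return 20 * np.log10(peak) - 10 * np.log10(mse)

# Toy example: a uniform off-by-one distortion gives MSE = 1,
# so PSNR = 20*log10(255) ~ 48.13 dB. Distortions "beyond human
# recognition" correspond to considerably higher values.
a = np.zeros((8, 8), dtype=np.uint8)
b = a + 1
print(round(psnr(a, b), 2))  # prints 48.13
```

Higher PSNR means the visualized signature image is closer to the original seal image, which is why the paper uses it as the imperceptibility criterion.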
Eye guidance during real-world scene search: The role color plays in central and peripheral vision.
Nuthmann, Antje; Malcolm, George L
2016-01-01
The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field, particularly color, facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color, with color in the periphery and gray in central vision, with gray in the periphery and color in central vision, or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data-based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung
2009-02-01
A billiard ball tracking system is combined with a visual guidance interface to instruct users in making a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball, and cue stick tracking. A least-squares error calibration process correlates the real-world and virtual-world pool ball coordinates for a precise guidance line calculation. Users adjust the cue stick on the pool table according to a visual guidance line displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line, within which the object ball still rolls into a pocket, determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object ball, the distance from the object ball to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from the proposed visual guidance system, while low-skill players show the maximum enhancement in skill with the help of the system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates show a pattern consistent with the analysis: the hit-in rate is tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results demonstrate the effectiveness of the system, and the analysis results can be used to devise an efficient game-playing strategy.
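The abstract does not give the tolerance-angle formula, but the dependence on object-to-pocket distance can be illustrated with simplified geometry. The sketch below assumes a point-target pocket model and invented ball/pocket dimensions, so it is only a rough illustration of the trend, not the paper's actual analysis:

```python
import math

BALL_R = 0.028575   # ball radius in m (57.15 mm pool ball, a common size)
POCKET_R = 0.057    # effective pocket-mouth half-width in m (assumed)

def tolerance_angle(d_object_pocket):
    """Half-angle (radians) around the ideal line within which the object
    ball still drops, in a simplified point-target model: the margin is
    the pocket half-width minus the ball radius, viewed from distance d."""
    effective = POCKET_R - BALL_R
    return math.atan2(effective, d_object_pocket)

# The acceptance cone narrows as the object ball sits farther from the
# pocket, matching the claim that this distance drives shot difficulty.
for d in (0.5, 1.0, 2.0):
    print(f"{d} m -> {math.degrees(tolerance_angle(d)):.2f} deg")
```

A fuller model would also amplify aiming error by the cue-to-object distance and the cut angle, the other two quantities the abstract identifies.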
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
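The abstract reports a quantitative model in which visual and interoceptive inputs combine into a single representation. The paper's exact model is not given here; a standard minimal sketch of such fusion, under the common assumption of independent Gaussian cue noise, is inverse-variance-weighted combination (the numeric values below are purely illustrative):

```python
def combine(mu_visual, var_visual, mu_intero, var_intero):
    """Fuse two noisy estimates into one multimodal estimate
    (maximum-likelihood combination for independent Gaussian cues)."""
    w_v = 1.0 / var_visual
    w_i = 1.0 / var_intero
    mu = (w_v * mu_visual + w_i * mu_intero) / (w_v + w_i)
    var = 1.0 / (w_v + w_i)
    return mu, var

# Example: heading estimates (degrees) from vision and interoception.
mu, var = combine(10.0, 4.0, 20.0, 4.0)
print(mu, var)  # equal reliabilities: mean 15.0, variance halved to 2.0
```

The key property, and the contrast with a separate-influences model, is that the fused estimate is more reliable than either cue alone and keeps tracking both inputs even when one (vision) is later removed.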
Horowitz, Seth S; Cheney, Cheryl A; Simmons, James A
2004-01-01
The big brown bat (Eptesicus fuscus) is an aerial-feeding insectivorous species that relies on echolocation to avoid obstacles and to detect flying insects. Spatial perception in the dark using echolocation challenges the vestibular system to function without substantial visual input for orientation. IR thermal video recordings show the complexity of bat flights in the field and suggest a highly dynamic role for the vestibular system in orientation and flight control. To examine this role, we carried out laboratory studies of flight behavior under illuminated and dark conditions in both static and rotating obstacle tests while administering heavy water (D2O) to impair vestibular inputs. Eptesicus carried out complex maneuvers through both fixed arrays of wires and a rotating obstacle array using both vision and echolocation, or when guided by echolocation alone. When treated with D2O in combination with lack of visual cues, bats showed considerable decrements in performance. These data indicate that big brown bats use both vision and echolocation to provide spatial registration for head position information generated by the vestibular system.
Stowasser, Annette; Mohr, Sarah; Buschbeck, Elke; Vilinsky, Ilya
2015-01-01
Students learn best when projects are multidisciplinary, hands-on, and provide ample opportunity for self-driven investigation. We present a teaching unit that leads students to explore relationships between sensory function and ecology. Field studies, which are rare in neurobiology education, are combined with laboratory experiments that assess visual properties of insect eyes, using electroretinography (ERG). Comprised of nearly one million species, insects are a diverse group of animals, living in nearly all habitats and ecological niches. Each of these lifestyles puts different demands on their visual systems, and accordingly, insects display a wide array of eye organizations and specializations. Physiologically relevant differences can be measured using relatively simple extracellular electrophysiological methods that can be carried out with standard equipment, much of which is already in place in most physiology laboratories. The teaching unit takes advantage of the large pool of locally available species, some of which likely show specialized visual properties that can be measured by students. In the course of the experiments, students collect local insects or other arthropods of their choice, are guided to formulate hypotheses about how the visual system of "their" insects might be tuned to the lifestyle of the species, and use ERGs to investigate the insects' visual response dynamics, and both chromatic and temporal properties of the visual system. Students are then guided to interpret their results in both a comparative physiological and ecological context. This set of experiments closely mirrors authentic research and has proven to be a popular, informative and highly engaging teaching tool. PMID:26240534
Latent binocular function in amblyopia.
Chadnova, Eva; Reynaud, Alexandre; Clavagnier, Simon; Hess, Robert F
2017-11-01
Recently, psychophysical studies have shown that humans with amblyopia do have binocular function that is not normally revealed, owing to dominant suppressive interactions under normal viewing conditions. Here we use magnetoencephalography (MEG) combined with dichoptic visual stimulation to investigate the underlying binocular function in humans with amblyopia, using stimuli that, because of their temporal properties, would be expected to bypass suppressive effects and to reveal any underlying binocular function. We recorded contrast response functions in visual cortical area V1 of amblyopes and normal observers using a steady-state visually evoked response (SSVER) protocol. We used stimuli frequency-tagged at 4 Hz and 6 Hz, which allowed identification of the responses from each eye and were of a sufficiently high temporal frequency (>3 Hz) to bypass suppression. To characterize binocular function, we compared dichoptic masking between the two eyes in normal and amblyopic participants, as well as interocular phase differences in the two groups. We observed that the primary visual cortex responds less to stimulation of the amblyopic eye than of the fellow eye. The pattern of interaction in the amblyopic visual system, however, was not significantly different between the amblyopic and fellow eyes. The amblyopic suppressive interactions were nevertheless lower than those observed in the binocular system of our normal observers. Furthermore, we identified an interocular processing delay of approximately 20 ms in our amblyopic group. To conclude, when suppression is greatly reduced, as is the case with our stimulation above 3 Hz, the amblyopic visual system exhibits a lack of binocular interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry
Jaworska, Katarzyna; Lages, Martin
2014-01-01
Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness in which perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm, we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063
A web platform for integrated surface water - groundwater modeling and data management
NASA Astrophysics Data System (ADS)
Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf
2016-04-01
Model-based decision support systems are considered to be reliable and time-efficient tools for resource management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing, and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments, and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration, and groundwater flow models with data storage, processing, and visualization tools. The concept is implemented in the form of a web-GIS application and is built from free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, MapServer for visualizing and publishing the data, OpenLayers for building the user interface, and others. The configuration of the system allows data input, storage, pre- and post-processing, and visualization to be performed in a single uninterrupted workflow. In addition, realization of the decision support system as a web service makes it easy to retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.
Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo
2013-01-01
Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) to improve the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
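The spatial half of such a filter, CSP, reduces to a generalized eigenvalue problem on the two class covariance matrices. A minimal sketch on synthetic two-channel data (illustrative only; the authors' pipeline also includes the wavelet-filtering stage, which is omitted here):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
# Synthetic trials, channels x samples: class A is strong on channel 0,
# class B on channel 1 (stand-ins for two EEG conditions).
A = rng.standard_normal((2, 2000)) * np.array([[3.0], [1.0]])
B = rng.standard_normal((2, 2000)) * np.array([[1.0], [3.0]])

Ca = np.cov(A)
Cb = np.cov(B)

# CSP: generalized eigendecomposition of (Ca, Ca + Cb); eigenvectors
# with extreme eigenvalues maximize the between-class variance ratio.
vals, vecs = eigh(Ca, Ca + Cb)
w = vecs[:, -1]  # spatial filter maximizing class-A variance share

var_a = float(w @ Ca @ w)
var_b = float(w @ Cb @ w)
print(var_a > var_b)  # True: the filter separates the classes by variance
```

Projecting each trial through a handful of such filters is what gives CSP its low online complexity, which is the property the abstract emphasizes for practical BCI use.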
An Unmanned Aerial Vehicle Cluster Network Cruise System for Monitor
NASA Astrophysics Data System (ADS)
Jiang, Jirong; Tao, Jinpeng; Xin, Guipeng
2018-06-01
The existing maritime cruising system mainly uses manned motorboats to monitor coastal water quality and to patrol and maintain navigation-aiding facilities, which entails high energy consumption, a small monitoring cruise range, insufficient information control, and low visualization. In recent years, the application of UAS in the maritime field has alleviated these problems to some extent. The cluster-based unmanned network monitoring cruise system designed in this project uses a floating, self-powered launching platform for small UAVs as a carrier, applies the idea of clustering, and combines the strong controllability of multi-rotor UAVs with their capability to carry customized modules, constituting an unmanned, visualized, and normalized monitoring cruise network that realizes maritime cruising, maintenance of navigation aids, and monitoring of coastal water quality.
NASA Technical Reports Server (NTRS)
Maxwell, Scott A.; Cooper, Brian; Hartman, Frank; Wright, John; Yen, Jeng; Leger, Chris
2005-01-01
A Mars rover is a complex system, and driving one is a complex endeavor. Rover drivers must be intimately familiar with the hardware and software of the mobility system and of the robotic arm. They must rapidly assess threats in the terrain, then creatively combine their knowledge of the vehicle and its environment to achieve each day's science and engineering objectives.
NASA Astrophysics Data System (ADS)
McClatchy, D. M.; Rizzo, E. J.; Krishnaswamy, V.; Kanick, S. C.; Wells, W. A.; Paulsen, K. D.; Pogue, B. W.
2017-02-01
There is a dire clinical need for surgical margin guidance in breast conserving therapy (BCT). We present a multispectral spatial frequency domain imaging (SFDI) system, spanning the visible and near-infrared (NIR) wavelengths, combined with a shielded X-ray computed tomography (CT) system, designed for intraoperative breast tumor margin assessment. While the CT can provide a volumetric visualization of the tumor core and its spiculations, the co-registered SFDI can provide superficial and quantitative information about localized changes in tissue morphology from light scattering parameters. These light scattering parameters include both model-based parameters of sub-diffusive light scattering related to the particle size scale distribution and textural information from the high spatial frequency reflectance. Because the SFDI and CT components are rigidly fixed, a simple transformation can be used to display the SFDI and CT data simultaneously in the same coordinate system. This is achieved through the Visualization Toolkit (vtk) file format in the open-source Slicer medical imaging software package. In this manuscript, the instrumentation, data processing, and preliminary human specimen data are presented. The ultimate goal of this work is to evaluate this technology in a prospective clinical trial, and the current limitations and engineering solutions to meet this goal are also discussed.
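Because the two imaging heads are rigidly fixed, co-registration reduces to applying one fixed homogeneous transform to every SFDI point. The sketch below shows that mapping with a purely hypothetical calibration transform; the actual system performs this step through VTK/Slicer rather than hand-rolled code.

```python
import numpy as np

def to_homogeneous_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and 3-vector."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

def map_points(transform, points):
    """Map an Nx3 array of points through a 4x4 homogeneous transform."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (transform @ homo.T).T[:, :3]

# Hypothetical calibration: SFDI frame offset 10 mm along z from the CT frame.
calib = to_homogeneous_transform(np.eye(3), [0.0, 0.0, 10.0])
sfdi_pts = np.array([[1.0, 2.0, 0.0], [4.0, 5.0, 0.0]])
ct_pts = map_points(calib, sfdi_pts)  # points expressed in the CT frame
```

Once both datasets share a coordinate frame, they can be overlaid directly in the same viewer.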
Hann, Angus; Chu, Kevin; Greenslade, Jaimi; Williams, Julian; Brown, Anthony
2015-01-01
This study aimed to determine whether performing cerebrospinal fluid spectrophotometry in addition to visual inspection detects more ruptured cerebral aneurysms than cerebrospinal fluid visual inspection alone in patients with a normal head CT scan who are suspected of suffering an aneurysmal subarachnoid haemorrhage (SAH). We performed a single-centre retrospective study of patients presenting to the emergency department of a tertiary hospital who underwent both a head CT scan and lumbar puncture to exclude SAH. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of an approach utilising both spectrophotometry and visual inspection (the combined approach) were compared with those of visual inspection alone. A total of 409 patients (mean age 37.8 years, 56.2% female) were recruited and six (1.5%) had a cerebral aneurysm on angiography. The sensitivity of visual inspection was 50% (95% confidence interval [CI]: 12.4-82.6%), specificity was 99% (95% CI: 97.5-99.7%), PPV was 42.9% (95% CI: 10.4-81.3%) and NPV was 99.2% (95% CI: 97.8-99.8%). The combined approach had a sensitivity of 100% (95% CI: 54.1-100%), a specificity of 79.7% (95% CI: 75.4-83.5%), a PPV of 6.8% (95% CI: 2.6-14.3%) and an NPV of 100% (95% CI: 98.8-100%). The sensitivity of the combined approach was not significantly different from that of visual inspection alone (p=0.25). Visual inspection had a significantly higher specificity than the combined approach (p<0.01). The combined approach detected more cases of aneurysmal SAH than visual inspection alone; however, the difference in sensitivity was not statistically significant. Visual xanthochromia should prompt angiography because of its superior specificity and PPV. Due to its reduced sensitivity, caution should be applied when using only visual inspection of the supernatant. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
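The four diagnostic figures follow directly from a 2x2 contingency table. A minimal sketch is below; the cell counts are inferred approximately from the reported rates (6 aneurysms among 409 patients), not taken from the study's raw data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true cases detected
        "specificity": tn / (tn + fp),  # fraction of non-cases ruled out
        "ppv": tp / (tp + fp),          # probability a positive test is a true case
        "npv": tn / (tn + fn),          # probability a negative test is a true non-case
    }

# Counts inferred from the reported rates: visual inspection detected 3 of the
# 6 aneurysms with about 4 false positives; the combined approach detected all
# 6 at the cost of roughly 82 false positives.
visual = diagnostic_metrics(tp=3, fp=4, fn=3, tn=399)
combined = diagnostic_metrics(tp=6, fp=82, fn=0, tn=321)
```

The trade-off the abstract describes is visible immediately: the combined approach gains sensitivity while its PPV collapses, because false positives vastly outnumber the six true cases.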
Egorova, T S; Smirnova, T S; Romashin, O V; Egorova, I V
2016-01-01
Complicated high myopia is one of the leading causes of visual disability in children, especially when it is combined with other pathological conditions and abnormalities, among which are disorders of the musculoskeletal system. In the present study, we examined for the first time visually impaired schoolchildren (n=44) suffering from complications of high myopia, using a computed optical topographer to evaluate the state of the vertebral column. The control group consisted of 60 children attending a secondary school. The study revealed various deformations of the musculoskeletal system, including scoliosis, misalignment of the pelvis, kyphosis, hyperlordosis, torsion, platypodia, and deformation of the lower extremities and the chest. These deformations were more pronounced in the visually impaired schoolchildren than in children of the same age in the control group (p<0.05). It is concluded that assessing the state of the vertebral column with this apparatus yields important information for elaborating and applying a series of measures for the timely provision of the medical assistance needed for the comprehensive rehabilitation of visually impaired schoolchildren presenting with complications of high myopia.
The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models
NASA Astrophysics Data System (ADS)
Sutrisno, Prajitno, Purnomo, W., Setyawan B.
2016-06-01
Recently, studies on the design of 3-D wind turbine blades have received less attention, even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been conducted rigorously. Wind turbine blade modeling studies mostly assume that spanwise blade sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, modeling studies of wind turbine blades combined with visualization experiments are needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed with twist and chord distributions following Schmitz's formula, and forward or backward sweep is added to the rotating blades. The additional sweep would enhance or diminish outward flow disturbance or the propagation of stall development over the spanwise blade surfaces, yielding a better blade design. Some combinations, i.e., blades with backward sweep, provide a more favorable 3-D rotational force on the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated with a tuft-visualization technique to study the appearance of laminar, separated, and boundary layer flow patterns surrounding the three-dimensional blade system.
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Piccione, F.; Kirkpatrick, M., III; Malone, T. B.
1982-01-01
The combination of human and machine capabilities into an integrated engineering system, which is a complex and interactive interdisciplinary undertaking, is discussed. Human-controlled remote systems, referred to as teleoperators, are reviewed, and the human factors requirements for remotely manned systems are identified. The data were developed in three principal teleoperator laboratories; the visual, manipulator and mobility laboratories are described. Three major sections are identified: (1) remote system components; (2) human operator considerations; and (3) teleoperator system simulation and concept verification.
NASA Astrophysics Data System (ADS)
Bykov, A. A.; Kutuza, I. B.; Zinin, P. V.; Machikhin, A. S.; Troyan, I. A.; Bulatov, K. M.; Batshev, V. I.; Mantrova, Y. V.; Gaponov, M. I.; Prakapenka, V. B.; Sharma, S. K.
2018-01-01
Recently it has been shown that it is possible to measure the two-dimensional distribution of the surface temperature of microscopic specimens. The main component of the system is a tandem imaging acousto-optical tunable filter synchronized with a video camera. In this report, we demonstrate that combining a laser heating system with a tandem imaging acousto-optical tunable filter allows measurement of the temperature distribution during laser heating of platinum plates, as well as visualization of the infrared laser beam that is widely used for laser heating in diamond anvil cells.
Zarzecka, Joanna; Gończowski, Krzysztof; Kesek, Barbara; Darczuk, Dagmara; Zapała, Jan
2006-01-01
Local anesthesia is one of the basic and most frequently performed interventions in dentistry. The procedure is very stressful for patients because it is associated with pain. New systems for delivering local anesthesia in dentistry have considerably revolutionized the technique by simplifying it and reducing pain. This study compares two local anesthesia delivery systems used in dentistry, The Wand and Injex, with respect to pain intensity during anesthesia and the intensity of fear before anesthesia with the given system. The Visual Analogue Scale (VAS), a verbal scale and questionnaires were used to evaluate pain and fear. On the basis of our investigations it can be concluded that there were statistically significant differences between men and women in the intensity of fear associated with the anesthesia procedure: men were less afraid than women. Patients anesthetized with The Wand declared less fear of a similar anesthesia in the future. The average pain intensity on both the verbal and visual scales during anesthesia with Injex (independently of sex) was statistically significantly higher than with The Wand: 0.57 and 8.55, respectively, for The Wand versus 2.02 and 32.18 for Injex (p = 0.001). On the basis of these results it can be concluded that The Wand is the less stressful and less painful local anesthesia delivery system.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain is found to be able to emulate similar graph/network models. This implies an important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej
2018-02-13
Brain-computer interface (BCI) systems can allow their users to communicate with the external world by recognizing intention directly from their brain activity, without the assistance of the peripheral motor nervous system. The P300-speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character), based on apparent motion, suffered from fewer refractory effects; however, its performance was not improved significantly. In addition, a presentation paradigm using a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and to yield better BCI performance. To extend this method of stimulus presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimulus presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm obtained significantly higher classification accuracies and bit rates than the conventional flip paradigm (p<0.01).
Motion processing with two eyes in three dimensions.
Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2011-02-11
The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for the use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.
High-frequency Ultrasound Imaging of Mouse Cervical Lymph Nodes.
Walk, Elyse L; McLaughlin, Sarah L; Weed, Scott A
2015-07-25
High-frequency ultrasound (HFUS) is widely employed as a non-invasive method for imaging internal anatomic structures in experimental small animal systems. HFUS has the ability to detect structures as small as 30 µm, a property that has been utilized for visualizing superficial lymph nodes in rodents in brightness (B)-mode. Combining power Doppler with B-mode imaging allows for measuring circulatory blood flow within lymph nodes and other organs. While HFUS has been utilized for lymph node imaging in a number of mouse model systems, a detailed protocol describing HFUS imaging and characterization of the cervical lymph nodes in mice has not been reported. Here, we show that HFUS can be adapted to detect and characterize cervical lymph nodes in mice. Combined B-mode and power Doppler imaging can be used to detect increases in blood flow in immunologically-enlarged cervical nodes. We also describe the use of B-mode imaging to conduct fine needle biopsies of cervical lymph nodes to retrieve lymph tissue for histological analysis. Finally, software-aided steps are described to calculate changes in lymph node volume and to visualize changes in lymph node morphology following image reconstruction. The ability to visually monitor changes in cervical lymph node biology over time provides a simple and powerful technique for the non-invasive monitoring of cervical lymph node alterations in preclinical mouse models of oral cavity disease.
Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre
2012-01-01
The need for better integration of the new generation of computer-assisted surgical (CAS) systems has recently been emphasized. One necessity for achieving this objective is to retrieve data from the operating room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis, and we validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consists in defining several visual cues for extracting semantic information, thereby characterizing each frame of the video. Five image-based classifiers were implemented accordingly, and a pupil segmentation step was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model the time-varying data; Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This association combines the advantages of all the methods for a better understanding of the problem. The framework was validated through various studies: six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
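The DTW half of such a time-series pipeline can be sketched in a few lines: the distance below aligns two visual-cue sequences under local stretching, and classification is then typically done by nearest-neighbor matching under this distance. This is the standard textbook recurrence, not the paper's implementation.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two scalar sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # dp[i][j] = cost of the best alignment of a[:i] with b[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend by matching, or by stretching either sequence
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]
```

Because DTW tolerates tempo variation, two executions of the same surgical phase performed at different speeds still align at low cost.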
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
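The headline accuracy figure above is a translation root-mean-square error over keyframe positions. The sketch below computes that metric for already-aligned trajectories (no Umeyama alignment step, which a full evaluation would include); the trajectories themselves are hypothetical, not EuRoC data.

```python
import math

def translation_rmse(estimated, ground_truth):
    """Root-mean-square translation error over aligned keyframe positions."""
    assert len(estimated) == len(ground_truth)
    sq = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq / len(estimated))

# Hypothetical trajectories: a constant 0.06 m offset along x.
est = [(0.06, 0.0, 0.0), (1.06, 0.0, 0.0), (2.06, 0.0, 0.0)]
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
rmse = translation_rmse(est, gt)
```

A value in the few-centimetre range, as reported here, is what makes monocular visual-inertial SLAM usable for true-scale augmented reality overlays.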
Visual Attention Model Based on Statistical Properties of Neuron Responses
Duan, Haibin; Wang, Xiaohua
2015-01-01
Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. The investigations carried out in this study therefore aim at demonstrating that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model based on the self-information of neuron responses is proposed to test and verify this hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may shed light on the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859
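The core self-information idea can be sketched very compactly: a response's saliency is its negative log probability, so rare responses score high. This is a minimal one-dimensional illustration with hypothetical quantized responses; the actual model operates on neuron responses in four color spaces with an entropy-based combination, which is not reproduced here.

```python
import math
from collections import Counter

def self_information_map(responses):
    """Map each (quantized) response to its self-information -log p(response).

    Rare responses carry more information and therefore score as more salient.
    """
    counts = Counter(responses)
    total = len(responses)
    return [-math.log(counts[r] / total) for r in responses]

# Hypothetical quantized responses: the value 9 is rare, the value 1 is common.
responses = [1, 1, 1, 1, 1, 1, 1, 9]
saliency = self_information_map(responses)  # the lone 9 gets the top score
```

Applied per location over an image, the same scoring highlights regions whose responses deviate from the scene's statistics while suppressing redundant background.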
Simulating spatial and temporal context of forest management using hypothetical landscapes
Eric J. Gustafson; Thomas R. Crow
1998-01-01
Spatially explicit models that combine remote sensing with geographic information systems (GIS) offer great promise to land managers because they consider the arrangement of landscape elements in time and space. Their visual and geographic nature facilitate the comparison of alternative landscape designs. Among various activities associated with forest management,...
NASA Technical Reports Server (NTRS)
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
1966-01-01
A video signal processor uses special-purpose integrated circuits with nonsaturating current-mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine it, for display on a color CRT, with analog information concerning fading.
The use of gasolines blended with a range of ethanol concentrations may result in inhalation of vapors containing a variable combination of ethanol with other volatile gasoline constituents. The possibility of exposure and potential interactions between vapor constituents suggest...
Visual Simulation of Microalgae Growth in Bioregenerative Life Support System
NASA Astrophysics Data System (ADS)
Zhao, Ming
Bioregenerative life support systems (BLSS) are among the key technologies for future human deep space exploration and long-term space missions. A BLSS uses a biological system as its core unit, in combination with other physical and chemical equipment, under proper control and manipulation by the crew to support life. Food production, waste treatment, and oxygen and water regeneration are all carried out by higher plants or microalgae in a BLSS, which is the most important characteristic distinguishing it from other kinds of life support systems. Microalgae are photoautotrophic microorganisms, and light is undoubtedly the most important factor limiting their growth and reproduction. Increasing or decreasing the light intensity changes the growth rate of the microalgae and thereby regulates the concentrations of oxygen and carbon dioxide in the system. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and realized using 3ds Max, Virtools and other three-dimensional software, in order to display the changes and their impact on oxygen and carbon dioxide intuitively. We changed the model structure and parameters, such as establishing a closed-loop control system and varying the light intensity, temperature and nutrient fluid velocity, carried out computer virtual simulation, and observed the dynamic changes of the system, with the aim of providing visualization support for system research.
Ultra-precise Masses and Magnitudes for the Gliese 268 M-dwarf Binary
NASA Astrophysics Data System (ADS)
Barry, R. K.; Demory, B. O.; Ségransan, D.; Forveille, T.; Danchi, W. C.; di Folco, E.; Queloz, D.; Torres, G.; Traub, W. A.; Delfosse, X.; Mayor, M.; Perrier, C.; Udry, S.
2009-02-01
Recent advances in interferometric astrometry combined with precision radial velocity techniques allow a significant improvement in the precision of the masses of M-dwarf stars in visual systems. We report recent astrometric observations of Gliese 268, an M-dwarf binary with a 10.4 day orbital period, with the IOTA interferometer, and radial velocity observations with the ELODIE instrument. Combining these measurements leads to preliminary masses of the constituent stars with uncertainties of 0.4%. The masses of the components are 0.22596+/-0.00084 Msolar for the primary and 0.19230+/-0.00071 Msolar for the secondary. The system parallax determined by these observations is 0.1560+/-0.0030 arcsec (2.0% uncertainty) and is within the Hipparcos error bars (0.1572+/-0.0033). We tested these physical parameters, along with the near-infrared luminosities of the stars, against stellar evolution models for low-mass stars. Discrepancies between the measured and theoretical values point toward a low-level departure from the predictions. These results are among the most precise masses measured for visual binaries.
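The mass determination rests on Kepler's third law (total mass from the orbit's semi-major axis and period) plus a mass ratio, which in practice comes from the ratio of the radial velocity semi-amplitudes. The sketch below reproduces the arithmetic; the semi-major axis value is a hypothetical one back-solved to match the reported total mass, and the mass ratio is taken from the reported component masses rather than from RV amplitudes.

```python
def total_mass_solar(semi_major_axis_au, period_years):
    """Kepler's third law: total system mass in solar masses."""
    return semi_major_axis_au ** 3 / period_years ** 2

def component_masses(total, mass_ratio):
    """Split a total mass given q = M2/M1 (from the RV amplitude ratio K1/K2)."""
    m1 = total / (1.0 + mass_ratio)
    return m1, total - m1

period_yr = 10.4 / 365.25
a_au = 0.0697  # hypothetical: chosen to reproduce the reported total mass
m_tot = total_mass_solar(a_au, period_yr)
m1, m2 = component_masses(m_tot, 0.19230 / 0.22596)
```

With a 10.4 day period, a total mass near 0.42 solar masses implies a separation of only about 0.07 AU, which is why long-baseline interferometry is needed to resolve the visual orbit.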
Kulkarni, Abhishek; Ertekin, Deniz; Lee, Chi-Hon; Hummel, Thomas
2016-03-17
The precise recognition of appropriate synaptic partner neurons is a critical step during neural circuit assembly. However, little is known about the developmental context in which recognition specificity is important for establishing synaptic contacts. We show that in the Drosophila visual system, sequential segregation of photoreceptor afferents, reflecting their birth order, leads to differential positioning of their growth cones in the early target region. By combining loss- and gain-of-function analyses we demonstrate that relative differences in the expression of the transcription factor Sequoia regulate R cell growth cone segregation. This initial growth cone positioning is consolidated via the cell-adhesion molecule Capricious in R8 axons. Further, we show that the initial growth cone positioning determines synaptic layer selection through proximity-based axon-target interactions. Taken together, we demonstrate that birth-order-dependent pre-patterning of afferent growth cones is an essential prerequisite for the identification of synaptic partner neurons during visual map formation in Drosophila.
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for the segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, i.e., possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which the radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods is realized. The interactive approach enables the physician to control the level and quality of segmentation. The suggested methods were tested on mammograms from the MIAS database, as a gold standard, and from clinical practice, using digitized films and digital images from a full-field digital mammography system.
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the spatiotemporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back-projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back-projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been, and still is, a great challenge to the modern computational world. It stretches the limits of the memory capacity, disk space, network bandwidth and CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Finally, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system with a time-varying dataset from selected JPL applications.
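At the heart of any volume renderer, hardware-accelerated or not, is per-ray compositing of color and opacity samples. The sketch below shows generic front-to-back alpha compositing with early ray termination; it illustrates the technique only and is not ParVox or VisServer code.

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, opacity) samples on a ray.

    Each sample is (c, a) with scalar color c and opacity a in [0, 1];
    marching stops early once the ray is effectively opaque.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * c * a   # attenuate by accumulated opacity
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

# Hypothetical ray: a semi-transparent sample in front of an opaque one.
color, alpha = composite_ray([(1.0, 0.5), (0.2, 1.0)])
```

Front-to-back order is what makes early termination possible: once accumulated opacity saturates, the remaining samples cannot change the pixel, which is a significant saving on dense time-varying volumes.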
Big data and visual analytics in anaesthesia and health care.
Simpao, A F; Ahumada, L M; Rehman, M A
2015-09-01
Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spills, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data originating from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance end-user interactions.
Holistic neural coding of Chinese character forms in bilateral ventral visual system.
Mo, Ce; Yu, Mengxia; Seger, Carol; Mo, Lei
2015-02-01
How are Chinese characters recognized and represented in the brains of skilled readers? A functional MRI fast-adaptation technique was used to address this question. We found that neural adaptation effects were limited to identical characters in the bilateral ventral visual system, while no activation reduction was observed for partially overlapping characters regardless of the spatial location of the shared sub-character components, suggesting highly selective neuronal tuning to whole characters. The consistent neural profile across the entire ventral visual cortex indicates that Chinese characters are represented as mutually distinctive wholes rather than combinations of sub-character components, which presents a salient contrast to the left-lateralized, simple-to-complex neural representations of alphabetic words. Our findings thus reveal a cultural modulation effect on both local neuronal activity patterns and the functional anatomical regions associated with written symbol recognition. Moreover, the cross-language discrepancy in written symbol recognition mechanisms might stem from language-specific early-stage learning experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Digitization and Visualization of Greenhouse Tomato Plants in Indoor Environments
Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D.; Fu, Daichang; Xin, Longjiao
2015-01-01
This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor, a Microsoft Kinect, is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, comprising acquisition and preprocessing of image data, detection of stem segments, removal of false detections, and automatic segmentation of stem segments. Correctly segmented texture samples, including stems and leaves, are then stored in a texture database for further use. Two types of tomato plants, the cherry tomato variety and the ordinary variety, are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. For visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process of virtual tomato plants by exerting control over two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and may also facilitate research on artificial organisms. PMID:25675284
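The L-system modelling mentioned above rewrites a string of symbols in parallel at each growth step, with brackets opening and closing lateral branches. A minimal sketch, assuming an illustrative rule set rather than the authors' parameterized grammars:

```python
# A minimal bracketed L-system expansion; the rule below is illustrative,
# not the paper's parameterized grammar for tomato plants.
def expand(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel, once per iteration."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# 'F' = a stem segment, '[' / ']' = start/end a lateral branch, 'L' = leaf.
rules = {"F": "F[FL]F"}
print(expand("F", rules, 2))  # F[FL]F[F[FL]FL]F[FL]F
```

In a full model, each symbol would carry parameters (age, branch form) and the string would be interpreted geometrically to place textured stem and leaf models.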
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
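Each layer of the VisNet hierarchy described above is a Self-Organising Map. As a rough sketch of the underlying SOM update rule (map size, learning rate, and neighbourhood width here are illustrative, not the paper's parameters):

```python
import numpy as np

# Toy sketch of a Self-Organising Map update step, the building block of
# each VisNet layer; all sizes and constants are illustrative.
rng = np.random.default_rng(0)
weights = rng.random((5, 5, 4))          # 5x5 map of 4-dim weight vectors

def train_step(weights, x, lr=0.5, sigma=1.0):
    # Best-matching unit (BMU): node whose weight vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Move the BMU and its neighbours toward x, weighted by a Gaussian
    # neighbourhood function centred on the BMU.
    ii, jj = np.indices(d.shape)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
    return weights

x = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(20):
    weights = train_step(weights, x)
# After repeated presentations, the BMU's weights converge toward the input.
```

Repeated presentation of stimuli that vary independently in identity and expression is what lets such maps form separate clusters for the two dimensions.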
Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.
Kim, Jeesun; Davis, Chris; Groot, Christopher
2009-12-01
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
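The additivity analysis above amounts to comparing the observed combined-cue benefit against the sum of the single-cue benefits. A toy sketch with invented proportion-correct scores (arranged to mirror the underadditive vowel result):

```python
# Toy additivity check; all scores are invented proportions correct,
# not the study's data.
baseline = 0.30            # envelope-only identification in noise
spectral_gain = 0.20       # benefit of adding spectral cues alone
visual_gain = 0.25         # benefit of adding visual speech alone
combined = 0.68            # observed score with all cues together

predicted_additive = baseline + spectral_gain + visual_gain
verdict = "additive or better" if combined >= predicted_additive else "underadditive"
print(verdict)  # underadditive
```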
Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter
2015-12-01
A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured.
Automatic medical image annotation and keyword-based image retrieval using relevance feedback.
Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal
2012-08-01
This paper presents novel multiple-keyword annotation for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
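The confidence scoring described above can be sketched as boosting each keyword's classifier probability by the probabilities of anatomically related keywords. The keywords and relation weights below are invented stand-ins for the paper's random-forest outputs and body relation graph:

```python
# Invented random-forest keyword probabilities and body-relation weights.
rf_probs = {"chest": 0.6, "lung": 0.3, "abdomen": 0.1}
relation = {("chest", "lung"): 1.0, ("chest", "abdomen"): 0.2,
            ("lung", "abdomen"): 0.2}

def confidence(keyword):
    """Boost a keyword's probability by those of related keywords."""
    score = rf_probs[keyword]
    for other, p in rf_probs.items():
        if other != keyword:
            w = relation.get((keyword, other), relation.get((other, keyword), 0.0))
            score += w * p
    return score

print(round(confidence("chest"), 2))  # 0.92
```

Anatomically plausible co-occurrences (chest and lung) thus reinforce each other, while unrelated keywords contribute little.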
Unhealthy behaviours and risk of visual impairment: The CONSTANCES population-based cohort.
Merle, Bénédicte M J; Moreau, Gwendoline; Ozguler, Anna; Srour, Bernard; Cougnard-Grégoire, Audrey; Goldberg, Marcel; Zins, Marie; Delcourt, Cécile
2018-04-26
Unhealthy behaviours are linked to a higher risk of eye diseases, but their combined effect on visual function is unknown. We aimed to examine the individual and combined associations of diet, physical activity, smoking and alcohol consumption with visual impairment among French adults. 38 903 participants aged 18-73 years from the CONSTANCES nationwide cohort (2012-2016) who had their visual acuity measured and who completed lifestyle, medical and food frequency questionnaires were included. Visual impairment was defined as a presenting visual acuity <20/40 in the better eye. After full multivariate adjustment, the odds of visual impairment increased with decreasing diet quality (p for trend = 0.04), decreasing physical activity (p for trend = 0.02) and increasing smoking pack-years (p for trend = 0.03), whereas no statistically significant association with alcohol consumption was found. The combination of several unhealthy behaviours was associated with increasing odds of visual impairment (p for trend = 0.0002), with a fully adjusted odds ratio of 1.81 (95% CI 1.18 to 2.79) for participants reporting 2 unhealthy behaviours and 2.92 (95% CI 1.60 to 5.32) for those reporting 3 unhealthy behaviours. An unhealthy lifestyle including low/intermediate diet quality, low physical activity and heavy smoking was associated with visual impairment in this large population-based study.
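The reported odds ratios are standard 2x2-table statistics. A sketch of the computation with invented counts (not the CONSTANCES data; the published estimates are also covariate-adjusted, which this sketch omits):

```python
import math

# Odds ratio and Wald 95% CI from a 2x2 table; counts are invented.
#                     impaired   not impaired
# 3 unhealthy behaviours: a=24        b=300
# 0 unhealthy behaviours: c=60        d=2200
a, b, c, d = 24, 300, 60, 2200

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```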
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations aim to provide researchers with a broader context of sensor locations relative to geologic characteristics, to serve as an educational resource in informal education settings, to increase public awareness, and to aid researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
Influence of background size, luminance and eccentricity on different adaptation mechanisms
Gloriani, Alejandro H.; Matesanz, Beatriz M.; Barrionuevo, Pablo A.; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A.
2016-01-01
Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06–110 cd/m2) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m2. In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. PMID:27210038
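The interplay of photon noise and luminance described above is often summarized by a classical two-branch threshold model: square-root (de Vries-Rose) behaviour where photon noise dominates, and constant-Weber-fraction behaviour at higher light levels. A sketch with illustrative constants, not the authors' fitted model:

```python
import math

# Classical two-branch detection-threshold model (illustrative constants,
# not the fitted parameters of the study above).
K_DVR = 0.05    # de Vries-Rose constant: threshold ~ sqrt(L) when
                # photon noise dominates (low, mesopic luminances)
K_WEBER = 0.01  # Weber fraction: threshold ~ L at photopic luminances

def threshold(L):
    """Detection threshold (cd/m2) on a background of luminance L (cd/m2)."""
    return max(K_DVR * math.sqrt(L), K_WEBER * L)

for L in (0.06, 1.0, 110.0):   # the study's mesopic-to-photopic range
    print(L, round(threshold(L), 4))
```

With these constants the crossover between the two regimes falls at L = (K_DVR / K_WEBER)^2 = 25 cd/m2; fitting both constants per eccentricity and field size is the kind of optimization the authors' full model performs.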
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Feng; Zhang, Xin; Xie, Jun
2015-03-10
This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing subjects' fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). First, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED. The SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished by traditional analysis methods. Second, we investigated the HFCC-SSVEP response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n^n targets from n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and a fixed number of sifting iterations (10) are used to overcome the shortcomings of end effects and the stopping criterion, and the generalized zero-crossing (GZC) method is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, improving the information transfer rate (ITR) and the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue subjects and reduce the safety hazards linked to photo-induced epileptic seizures, helping to ensure system efficiency and safety. This study tested three subjects in order to verify the feasibility of the proposed method.
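The combinatorial gain of the HFCC-SSVEP paradigm comes from assigning each target a sequence of stimulation frequencies rather than a single one. Assuming n frequencies presented over a sequence of n time slots (the slot structure here is an illustrative assumption), n frequencies yield n^n distinct codes:

```python
from itertools import product

# Sketch of frequency combination coding: each target is a sequence of
# stimulation frequencies, one per time slot. The three frequencies come
# from the abstract; the n-slot structure is an illustrative assumption.
freqs = [25.0, 33.33, 40.0]                     # Hz, n = 3
codes = list(product(freqs, repeat=len(freqs))) # all n-slot sequences
print(len(codes))              # 27 = 3**3 possible targets
print(codes[0], codes[-1])
```

So three high frequencies, each safe and minimally fatiguing, can address 27 targets instead of 3.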
From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?
González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R
2006-01-01
Molecular and gene expression data, and more recently mitochondrial genes and possible epigenetic regulation by non-coding genes, are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interaction and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on these molecular insights, gene expression profiles and the epigenetic regulation of genes, we returned to neurophysiology (brain oscillations) and found a putative role for visual experience (i.e. visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that 'visual experience' has a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes sensitive to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before it reaches the primary visual cortex is preserved. The same effect can be expected if target genes are localized within the visual pathway, which is actually more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable, sensory-specific 'insult', i.e. distorted visual information, enters the visual system and is propagated to fronto-temporo-parietal multimodal areas even from early maturation periods.
The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing an abnormal 'cell communication' mediated by the visual system that may further interfere with local physiology. In this context, the strategy for finding target genes needs to be rearranged and redirected toward visual-related genes. Furthermore, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that would enable gene association studies in schizophrenia.
NASA Astrophysics Data System (ADS)
Li, Yonghui; Ullrich, Carsten
2013-03-01
The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes, which give information on excitations, in particular those with charge-transfer character, that cannot be obtained from the density alone. Illustrations of the TDM and comparisons with standard density-difference plots will be shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651.
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
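The probabilistic construction described above can be caricatured as drawing a random subset of a fixed component repertoire for each stimulus presentation; discrimination performance then depends on how often two patterns yield different representations. The repertoire and selection probabilities below are invented for illustration:

```python
import random

# Toy sketch of a discrete internal representation scheme: each stimulus
# induces a random subset of a fixed, finite repertoire of components.
# Repertoire and probabilities are invented, not from the paper.
random.seed(0)
REPERTOIRE = ["edge", "corner", "curve", "line-end"]

def represent(probs):
    """Draw one discrete internal representation (a set of components)."""
    return frozenset(c for c in REPERTOIRE if random.random() < probs[c])

pattern_a = {"edge": 0.9, "corner": 0.8, "curve": 0.1, "line-end": 0.1}
pattern_b = {"edge": 0.9, "corner": 0.2, "curve": 0.7, "line-end": 0.1}

# Monte Carlo estimate of how often the two patterns yield different
# representations: an upper bound on their discriminability here.
trials = 10000
different = sum(represent(pattern_a) != represent(pattern_b)
                for _ in range(trials))
print(different / trials)
```

When two patterns differ only slightly in shape, their selection probabilities nearly coincide and this proportion falls toward chance, which is the small-difference regime the paper's relationship addresses.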
Combining textual and visual information for image retrieval in the medical domain.
Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore
2011-01-01
In this article we have assembled the experience obtained from our participation in the ImageCLEF evaluation task over the past two years. We explored the use of linear combinations for image retrieval, combining visual and textual sources of information about images. From our experiments we conclude that a mixed retrieval technique that applies textual and visual retrieval in an interchangeably repeated manner improves performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) increased from 0.01 to 0.15 and 0.087 for the 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
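The linear-combination strategy described above can be sketched as re-ranking the top textual results by a weighted sum of textual and visual similarity. The document IDs, scores, and weight alpha below are invented:

```python
# Invented textual and visual similarity scores for four candidate images.
text_scores = {"img1": 0.9, "img2": 0.7, "img3": 0.6, "img4": 0.2}
visual_scores = {"img1": 0.1, "img2": 0.8, "img3": 0.85, "img4": 0.95}

def rerank(candidates, alpha=0.5):
    """Linear fusion: alpha weights textual against visual evidence."""
    return sorted(candidates,
                  key=lambda d: alpha * text_scores[d]
                              + (1 - alpha) * visual_scores[d],
                  reverse=True)

# CBIR is applied only to the textual top-k (here k=3), mirroring the
# "CBIR on the top 1000 textual results" strategy in the abstract.
top_text = sorted(text_scores, key=text_scores.get, reverse=True)[:3]
print(rerank(top_text))  # ['img2', 'img3', 'img1']
```

Restricting the expensive visual comparison to the textual top-k is what keeps the combined approach scalable.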
Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search
Veeraraghavan, Harini; Miller, James V.
2013-01-01
In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the opensource 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207
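The drill-down interface described above can be caricatured as translating a user's free-text query into a controlled term and then exposing that term's sub-facets for refinement. The miniature ontology and synonym table below are invented, not the FMA:

```python
# Toy anatomy ontology (parent -> child facets) and a synonym table that
# maps user vocabulary onto controlled terms. Both are invented examples.
ontology = {
    "brain": ["cerebrum", "cerebellum", "brainstem"],
    "cerebrum": ["frontal lobe", "parietal lobe"],
    "brainstem": ["midbrain", "pons", "medulla oblongata"],
}
synonyms = {"hindbrain stalk": "brainstem"}

def drill_down(query):
    """Translate the query to a controlled term and return its sub-facets."""
    term = synonyms.get(query, query)
    return term, ontology.get(term, [])

print(drill_down("hindbrain stalk"))
```

Each returned sub-facet would be linked to the corresponding geometric model in the 3D atlas, so selecting a facet drives the visualization directly.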
NASA Astrophysics Data System (ADS)
Peterson, David
2005-11-01
The audio and visual capabilities of the planetarium at Francis Marion University were upgraded in Fall 2004 to incorporate three Barco CRT projectors and surround sound. Controlled by the Astro-FX media manager system developed by Bowen Technovation, the projectors focus on the 33-foot dome installed in 1978 for the Spitz 512 star projector. The significant additional capabilities of the new combined systems will be presented together with a review of the planetarium renovation procedure.
Relating interesting quantitative time series patterns with text events and text features
NASA Astrophysics Data System (ADS)
Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.
2013-12-01
In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis workflow.
The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
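The first analysis step described above, extracting interesting intervals from a time series and relating them to news, can be sketched with a simple return-threshold heuristic; the prices, news days, and threshold below are invented:

```python
# Flag "interesting" days in a price series by a one-day return heuristic,
# then intersect with days on which news appeared. All data are invented.
prices = [100, 101, 99, 107, 108, 96, 97, 98, 105, 104]
news_days = {3, 5, 8}                 # days with news reports

def interesting_days(prices, threshold=0.05):
    """Days whose absolute one-day return exceeds the threshold."""
    return {t for t in range(1, len(prices))
            if abs(prices[t] - prices[t - 1]) / prices[t - 1] > threshold}

hits = interesting_days(prices) & news_days
print(sorted(hits))   # [3, 5, 8]
```

The co-occurring days found this way would feed the a-priori step, which searches for text-feature patterns that recur across them.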
Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek
2018-04-26
Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of Structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. 
Leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least two experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.
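The regional standardized uptake value ratio (SUVR) mentioned above is, at its core, the mean tracer uptake in a target region divided by the mean uptake in a reference region. A minimal NumPy sketch under that definition; the function name, toy data, and masks are illustrative, not the paper's FreeSurfer-based pipeline:

```python
import numpy as np

def suvr(pet, target_mask, reference_mask):
    """Regional SUVR: mean tracer uptake in a target region divided by
    mean uptake in a reference region (e.g. cerebellar gray matter)."""
    pet = np.asarray(pet, dtype=float)
    return pet[target_mask].mean() / pet[reference_mask].mean()

# toy flattened "PET image": uptake 2.0 in target voxels, 1.0 in reference
pet = np.array([2.0, 2.0, 1.0, 1.0])
target = np.array([True, True, False, False])
reference = np.array([False, False, True, True])
print(suvr(pet, target, reference))  # 2.0
```

In practice the masks would come from an atlas-based segmentation of the co-registered structural image rather than be hand-written.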
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio, and has made progress in single channels, excluding text. Visual analytics has focused on the user's interaction with data during the analytic process plus the fundamental mathematics, and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is combining multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Initial neuro-ophthalmological manifestations in Churg–Strauss syndrome
Vallet, Anne-Evelyne; Didelot, Adrien; Guebre-Egziabher, Fitsum; Bernard, Martine; Mauguière, François
2010-01-01
Churg–Strauss syndrome (CSS) is a systemic vasculitis with frequent respiratory tract involvement. It can also affect the nervous system, notably the optic tract. The present work reports the case of a 65-year-old man diagnosed as having CSS in the context of several acute-onset neurological symptoms including muscle weakness and signs of temporal arteritis, notably bilateral anterior ischaemic optic neuropathy (ON). Electroretinograms (ERGs) and visual evoked potentials (VEPs) were performed. Flash ERGs were normal whereas VEPs were highly abnormal, showing a dramatic voltage reduction, thus confirming the ON. The vision outcome was poor. Ophthalmological presentations of CSS have rarely been reported, and no previous case of sudden blindness documented by combined ERG and VEP investigations was found in the literature. The present case strongly suggests that the occurrence of visual loss in the context of systemic inflammation with hypereosinophilia should lead to consideration of the diagnosis of CSS. PMID:22789694
NASA Technical Reports Server (NTRS)
Westmoreland, Sally; Stow, Douglas A.
1992-01-01
A framework is proposed for analyzing ancillary data and developing procedures for incorporating ancillary data to aid interactive identification of land-use categories in land-use updates. The procedures were developed for use within an integrated image processing/geographic information system (GIS) that permits simultaneous display of digital image data with the vector land-use data to be updated. With such systems and procedures, automated techniques are integrated with visual-based manual interpretation to exploit the capabilities of both. The procedural framework developed was applied as part of a case study to update a portion of the land-use layer in a regional-scale GIS. About 75 percent of the area in the study site that experienced a change in land use was correctly labeled into 19 categories using the combination of automated and visual interpretation procedures developed in the study.
Novel Models of Visual Topographic Map Alignment in the Superior Colliculus
El-Ghazawi, Tarek A.; Triplett, Jason W.
2016-01-01
The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309
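The Correlational Model described above assumes that V1 inputs align with established retinal inputs through simple correlative (Hebbian) firing. A minimal one-dimensional caricature of that idea is sketched below; the map sizes, learning rate, Gaussian activity bumps, and normalization scheme are all arbitrary illustrations, not the authors' published model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # cells per map (illustrative)
W = rng.uniform(0.0, 0.1, size=(n, n))    # V1 -> SC connection weights

eta = 0.05
for _ in range(500):
    c = rng.integers(n)                   # locus of a spontaneous activity event
    v1 = np.exp(-0.5 * ((np.arange(n) - c) / 2.0) ** 2)   # V1 input bump
    sc = np.exp(-0.5 * ((np.arange(n) - c) / 2.0) ** 2)   # retina-driven SC bump
    W += eta * np.outer(sc, v1)           # Hebbian correlative strengthening
    W /= W.sum(axis=1, keepdims=True)     # normalization keeps total weight bounded

# each SC cell's strongest V1 input ends up topographically aligned
assert all(abs(W[i].argmax() - i) <= 2 for i in range(n))
```

Because the retina-driven SC bump shares the locus of the V1 bump, correlated firing alone is enough to pull the initially random V1-to-SC weights into register, which is the intuition behind the Correlational Model.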
Vision-related fitness to drive mobility scooters: A practical driving test.
Cordes, Christina; Heutink, Joost; Tucha, Oliver M; Brookhuis, Karel A; Brouwer, Wiebo H; Melis-Dankers, Bart J M
2017-03-06
To investigate practical fitness to drive mobility scooters, comparing visually impaired participants with healthy controls. Between-subjects design. Forty-six visually impaired (13 with very low visual acuity, 10 with low visual acuity, 11 with peripheral field defects, 12 with multiple visual impairment) and 35 normal-sighted controls. Participants completed a practical mobility scooter test-drive, which was recorded on video. Two independent occupational therapists specialized in orientation and mobility evaluated the videos systematically. Approximately 90% of the visually impaired participants passed the driving test. On average, participants with visual impairments performed worse than normal-sighted controls, but were judged sufficiently safe. In particular, difficulties were observed in participants with peripheral visual field defects and those with a combination of low visual acuity and visual field defects. People with visual impairment are, in practice, fit to drive mobility scooters; thus visual impairment on its own should not be viewed as a determinant of safety to drive mobility scooters. However, special attention should be paid to individuals with visual field defects with or without a combined low visual acuity. The use of an individual practical fitness-to-drive test is advised.
Computational implications of activity-dependent neuronal processes
NASA Astrophysics Data System (ADS)
Goldman, Mark Steven
Synapses, the connections between neurons, often fail to transmit a large percentage of the action potentials that they receive. I describe several models of synaptic transmission at a single stochastic synapse with an activity-dependent probability of transmission and demonstrate how synaptic transmission failures may increase the efficiency with which a synapse transmits information. Spike trains in the visual cortex of freely viewing monkeys have positive autocorrelations that are indicative of a redundant representation of the information they contain. I show how a synapse with activity-dependent transmission failures modeled after those occurring in visual cortical synapses can remove this redundancy by transmitting a decorrelated subset of the spike trains it receives. I suggest that redundancy reduction at individual synapses saves synaptic resources while increasing the sensitivity of the postsynaptic neuron to information arriving along many inputs. For a neuron receiving input from many decorrelating synapses, my analysis leads to a prediction of the number of visual inputs to a neuron and the cross-correlations between these inputs, and suggests that the time scale of synaptic dynamics observed in sensory areas corresponds to a fundamental time scale for processing sensory information. Systems with activity-dependent changes in their parameters, or plasticity, often display a wide variability in their individual components that belies the stability of their function. Motivated by experiments demonstrating that identified neurons with stereotyped function can have a large variability in the densities of their ion channels, or ionic conductances, I build a conductance-based model of a single neuron. The neuron's firing activity is relatively insensitive to changes in certain combinations of conductances, but markedly sensitive to changes in other combinations.
Using a combined modeling and experimental approach, I show that neuromodulators and regulatory processes target sensitive combinations of conductances. I suggest that the variability observed in conductance measurements occurs along insensitive combinations of conductances and could result from homeostatic processes that allow the neuron's conductances to drift without triggering activity- dependent feedback mechanisms. These results together suggest that plastic systems may have a high degree of flexibility and variability in their components without a loss of robustness in their response properties.
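The activity-dependent transmission failures described above can be caricatured as a release probability that is depleted by each successful transmission and recovers between spikes. The sketch below (all parameter values are arbitrary illustrations, not the thesis's fitted model) shows how such a synapse preferentially drops spikes from redundant, closely spaced trains:

```python
import numpy as np

def transmit(spike_times, p_max=0.9, tau=50.0, depletion=0.5, seed=0):
    """Stochastic synapse with an activity-dependent transmission
    probability: each success scales p down by `depletion`; p then
    recovers toward p_max with time constant tau (times in ms)."""
    rng = np.random.default_rng(seed)
    p, last_t, out = p_max, -np.inf, []
    for t in spike_times:
        p = p_max - (p_max - p) * np.exp(-(t - last_t) / tau)  # recovery
        if rng.random() < p:
            out.append(t)
            p *= depletion                                     # depression
        last_t = t
    return out

burst = transmit(np.arange(0.0, 1000.0, 5.0))      # 200 spikes, 5 ms apart
sparse = transmit(np.arange(0.0, 40000.0, 200.0))  # 200 spikes, 200 ms apart
# closely spaced (redundant) spikes are transmitted at a much lower rate
print(len(burst), len(sparse))
```

Spikes arriving within a burst find the synapse depressed and are filtered out, while well-separated spikes see a recovered synapse, which is the mechanism by which such failures can decorrelate a redundant input train.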
Visual Analytics of Surveillance Data on Foodborne Vibriosis, United States, 1973–2010
Sims, Jennifer N.; Isokpehi, Raphael D.; Cooper, Gabrielle A.; Bass, Michael P.; Brown, Shyretha D.; St John, Alison L.; Gulig, Paul A.; Cohly, Hari H.P.
2011-01-01
Foodborne illnesses caused by microbial and chemical contaminants in food are a substantial health burden worldwide. In 2007, human vibriosis (non-cholera Vibrio infections) became a notifiable disease in the United States. In addition, Vibrio species are among the 31 major known pathogens transmitted through food in the United States. Diverse surveillance systems for foodborne pathogens also track outbreaks, illnesses, hospitalization and deaths due to non-cholera vibrios. Considering the recognition of vibriosis as a notifiable disease in the United States and the availability of diverse surveillance systems, there is a need for the development of easily deployed visualization and analysis approaches that can combine diverse data sources in an interactive manner. Current efforts to address this need are still limited. Visual analytics is an iterative process conducted via visual interfaces that involves collecting information, data preprocessing, knowledge representation, interaction, and decision making. We have utilized public domain outbreak and surveillance data sources covering 1973 to 2010, as well as visual analytics software to demonstrate integrated and interactive visualizations of data on foodborne outbreaks and surveillance of Vibrio species. Through the data visualization, we were able to identify unique patterns and/or novel relationships within and across datasets regarding (i) causative agent; (ii) foodborne outbreaks and illness per state; (iii) location of infection; (iv) vehicle (food) of infection; (v) anatomical site of isolation of Vibrio species; (vi) patients and complications of vibriosis; (vii) incidence of laboratory-confirmed vibriosis and V. parahaemolyticus outbreaks. The additional use of emerging visual analytics approaches for interaction with data on vibriosis, including non-foodborne related disease, can guide disease control and prevention as well as ongoing outbreak investigations. PMID:22174586
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
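The "theoretically predicted improvement" against which such studies compare audiovisual performance is usually the maximum-likelihood (inverse-variance-weighted) combination rule, under which the fused estimate is never more variable than the best single cue. A minimal sketch with illustrative numbers (not data from the study):

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Inverse-variance (maximum-likelihood) cue combination: each cue
    is weighted by its reliability, and the fused variance is smaller
    than either single-cue variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    fused = w_a * est_a + (1.0 - w_a) * est_v
    fused_var = (var_a * var_v) / (var_a + var_v)
    return fused, fused_var

# equally reliable auditory and visual position estimates (toy numbers)
est, var = mle_combine(10.0, 4.0, 12.0, 4.0)
print(est, var)  # 11.0 2.0
```

With two equally reliable cues the fused variance is halved; the study's finding is that observed audiovisual variability fell short of this prediction at peak limb velocity.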
Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules
Qiu, Fangtu T.; von der Heydt, Rüdiger
2006-01-01
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (gestalt factors). These are combined in single neurons so that the ‘near’ side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays gestalt factors influence the responses and can enhance or null the stereoscopic depth information. PMID:15996555
Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.
Qiu, Fangtu T; von der Heydt, Rüdiger
2005-07-07
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring 3D layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border ownership coding). Here, we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (Gestalt factors). These are combined in single neurons so that the "near" side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays, Gestalt factors influence the responses and can enhance or null the stereoscopic depth information.
Optical countermeasures against CLOS weapon systems
NASA Astrophysics Data System (ADS)
Toet, Alexander; Benoist, Koen W.; van Lingen, Joost N. J.; Schleijpen, H. Ric M. A.
2013-10-01
There are many weapon systems in which a human operator acquires a target, tracks it and designates it. Optical countermeasures against this type of system deny the operator the possibility to fulfill this visual task. We describe the different effects that result from stimulation of the human visual system with high-intensity (visible) light, and the associated potential operational impact. Of practical use are flash blindness, where an intense flash of light produces a temporary "blind spot" in (part of) the visual field; flicker distraction, where strong intensity and/or color changes at an uncomfortable frequency are produced; and disability glare, where a source of light leads to contrast reduction. Hence there are three possibilities to disrupt the visual task of an operator with optical countermeasures such as flares, lasers, or a combination of these: an intense flash of light, an annoying light flicker, or a glare source. A variety of flares for this purpose is now available or under development: high-intensity flash flares, continuous-burning flares, or strobe flares, which have an oscillating intensity. The use of flare arrays seems particularly promising as an optical countermeasure. Lasers are particularly suited to interfere with human vision, because they can easily be varied in intensity, color and size, but they have to be directed at the (human) target, and issues like pointing and eye safety have to be taken into account. Here we discuss the design issues and the operational impact of optical countermeasures against human operators.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support localization of anatomical targets, placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
SCSODC: Integrating Ocean Data for Visualization Sharing and Application
NASA Astrophysics Data System (ADS)
Xu, C.; Li, S.; Wang, D.; Xie, Q.
2014-02-01
The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve the collection and management of ocean data at the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long-term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate their efficient use and distribution to possible users. However, data sharing and applications were limited by the distributed and heterogeneous character of the data, which made integration difficult. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users, and covers the full range of processes such as data discovery, evaluation and access, combining client/server (C/S) and browser/server (B/S) modes. It provides a visualized management interface for data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access the interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the SCSODC system is able to implement web-based visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.
Elements of Success: WorkReady Philadelphia Program Year 2011-2012 Report
ERIC Educational Resources Information Center
Philadelphia Youth Network, 2012
2012-01-01
What does it take to deliver WorkReady Philadelphia's high-quality career-connected programming? In short, it's all about the "elements"--those essential components of the system that combine to produce success for young people. This 2011-12 WorkReady report reinforces this theme by using visual aspects of the "Periodic Table of…
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…
Interactive Electronic Technical Manuals (IETMs) Annotated Bibliography
2002-10-22
translated from their graphical counterparts. This paper examines a set of challenging issues facing speech interface designers and describes approaches... spreading network, combined with visual design techniques, such as typography, color, and transparency, enables the system to fluidly respond to... However, most research and design guidelines address typography and color separately without considering their spatial context or their function as
NASA Astrophysics Data System (ADS)
Schiltz, Holly Kristine
Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations.
In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments about the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students, but also to support them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation. Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen.
This work shows how the use of deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems, and give the instructor unique insight into students' mentally constructed strategies.
Contrast statistics for foveated visual systems: fixation selection by minimizing contrast entropy
NASA Astrophysics Data System (ADS)
Raj, Raghu; Geisler, Wilson S.; Frazor, Robert A.; Bovik, Alan C.
2005-10-01
The human visual system combines a wide field of view with a high-resolution fovea and uses eye, head, and body movements to direct the fovea to potentially relevant locations in the visual scene. This strategy is sensible for a visual system with limited neural resources. However, for this strategy to be effective, the visual system needs sophisticated central mechanisms that efficiently exploit the varying spatial resolution of the retina. To gain insight into some of the design requirements of these central mechanisms, we have analyzed the effects of variable spatial resolution on local contrast in 300 calibrated natural images. Specifically, for each retinal eccentricity (which produces a certain effective level of blur), and for each value of local contrast observed at that eccentricity, we measured the probability distribution of the local contrast in the unblurred image. These conditional probability distributions can be regarded as posterior probability distributions for the "true" unblurred contrast, given an observed contrast at a given eccentricity. We find that these conditional probability distributions are adequately described by a few simple formulas. To explore how these statistics might be exploited by central perceptual mechanisms, we consider the task of selecting successive fixation points, where the goal on each fixation is to maximize total contrast information gained about the image (i.e., minimize total contrast uncertainty). We derive an entropy minimization algorithm and find that it performs optimally at reducing total contrast uncertainty and that it also works well at reducing the mean squared error between the original image and the image reconstructed from the multiple fixations. Our results show that measurements of local contrast alone could efficiently drive the scan paths of the eye when the goal is to gain as much information about the spatial structure of a scene as possible.
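Local contrast in analyses like this is commonly defined as RMS contrast: the standard deviation of luminance divided by its mean within a patch. A minimal sketch under that common definition (the paper's exact windowing, blur model, and entropy algorithm are not reproduced here):

```python
import numpy as np

def local_rms_contrast(image, patch=16):
    """RMS contrast (std / mean of luminance) in non-overlapping patches."""
    h, w = image.shape
    out = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = block.std() / block.mean()
    return out

rng = np.random.default_rng(1)
flat = np.full((16, 16), 100.0)                         # uniform: zero contrast
textured = 100.0 + 25.0 * rng.standard_normal((16, 16)) # toy high-contrast patch
cmap = local_rms_contrast(np.hstack([flat, textured]))
# an entropy-minimizing fixation strategy would favor the textured half
assert cmap[0, 0] < cmap[0, 1]
```

A fixation-selection scheme of the kind the paper derives would operate on such a contrast map, directing the fovea toward regions where the remaining contrast uncertainty is largest.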
Richman, Nadia I.; Gibbons, James M.; Turvey, Samuel T.; Akamatsu, Tomonari; Ahmed, Benazir; Mahabub, Emile; Smith, Brian D.; Jones, Julia P. G.
2014-01-01
Detection of animals during visual surveys is rarely perfect or constant, and failure to account for imperfect detectability affects the accuracy of abundance estimates. Freshwater cetaceans are among the most threatened group of mammals, and visual surveys are a commonly employed method for estimating population size despite concerns over imperfect and unquantified detectability. We used a combined visual-acoustic survey to estimate detectability of Ganges River dolphins (Platanista gangetica gangetica) in four waterways of southern Bangladesh. The combined visual-acoustic survey resulted in consistently higher detectability than a single observer-team visual survey, thereby improving power to detect trends. Visual detectability was particularly low for dolphins close to meanders where these habitat features temporarily block the view of the preceding river surface. This systematic bias in detectability during visual-only surveys may lead researchers to underestimate the importance of heavily meandering river reaches. Although the benefits of acoustic surveys are increasingly recognised for marine cetaceans, they have not been widely used for monitoring abundance of freshwater cetaceans due to perceived costs and technical skill requirements. We show that acoustic surveys are in fact a relatively cost-effective approach for surveying freshwater cetaceans, once it is acknowledged that methods that do not account for imperfect detectability are of limited value for monitoring. PMID:24805782
Richman, Nadia I; Gibbons, James M; Turvey, Samuel T; Akamatsu, Tomonari; Ahmed, Benazir; Mahabub, Emile; Smith, Brian D; Jones, Julia P G
2014-01-01
Detection of animals during visual surveys is rarely perfect or constant, and failure to account for imperfect detectability affects the accuracy of abundance estimates. Freshwater cetaceans are among the most threatened group of mammals, and visual surveys are a commonly employed method for estimating population size despite concerns over imperfect and unquantified detectability. We used a combined visual-acoustic survey to estimate detectability of Ganges River dolphins (Platanista gangetica gangetica) in four waterways of southern Bangladesh. The combined visual-acoustic survey resulted in consistently higher detectability than a single observer-team visual survey, thereby improving power to detect trends. Visual detectability was particularly low for dolphins close to meanders where these habitat features temporarily block the view of the preceding river surface. This systematic bias in detectability during visual-only surveys may lead researchers to underestimate the importance of heavily meandering river reaches. Although the benefits of acoustic surveys are increasingly recognised for marine cetaceans, they have not been widely used for monitoring abundance of freshwater cetaceans due to perceived costs and technical skill requirements. We show that acoustic surveys are in fact a relatively cost-effective approach for surveying freshwater cetaceans, once it is acknowledged that methods that do not account for imperfect detectability are of limited value for monitoring.
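The gain from combining visual and acoustic platforms is easy to see under the simplifying assumption that the two detect independently: the chance that at least one platform detects a group exceeds either single-platform probability. The sketch below uses this textbook independence formula with illustrative numbers; the paper's actual detectability estimates come from a mark-recapture-style analysis, not this shortcut:

```python
def combined_detectability(p_visual, p_acoustic):
    """Probability that at least one of two independent platforms
    detects a dolphin group (simplifying independence assumption)."""
    return 1.0 - (1.0 - p_visual) * (1.0 - p_acoustic)

# illustrative (hypothetical) per-platform detection probabilities
print(combined_detectability(0.6, 0.7))
```

With per-platform probabilities of 0.6 and 0.7, the combined probability is 0.88, which illustrates why the combined visual-acoustic survey yields consistently higher detectability and greater power to detect trends.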
NASA Astrophysics Data System (ADS)
Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer
2007-02-01
Three-dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single-cell imaging, the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass, thereby precluding the use of cells in suspension. Aiming to overcome this limitation, we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield and confocal microscope imaging, we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single cells, cell aggregates, and embryos). Further, we show how visual screening by microrotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single-cell cloning with high viability. Our methodology, combining high-content 3D visual screening with one-step single-cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis of haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.
Lesion classification using clinical and visual data fusion by multiple kernel learning
NASA Astrophysics Data System (ADS)
Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf
2014-03-01
To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), considerable effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited, since they rely on correct automatic lesion detection and localization, and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct a textual descriptor of each patient by extracting relevant keywords from the patient's clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely, early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields a significant improvement in classification accuracy compared to a classifier based on image features alone.
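The intermediate (MKL-based) fusion described above rests on combining per-modality kernels with weights. The following is a minimal sketch of that idea in plain NumPy, with fixed illustrative weights and toy data; the feature values, `gamma`, and the weights `beta` are assumptions for illustration, not the authors' implementation (which learns the kernel weights jointly with the SVM).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel on dense visual features
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(X, Y):
    # Linear kernel on sparse bag-of-keywords textual descriptors
    return X @ Y.T

def combined_kernel(Kv, Kt, beta=(0.7, 0.3)):
    # MKL idea: a convex combination of base kernels; in real MKL
    # the weights beta are learned jointly with the SVM.
    return beta[0] * Kv + beta[1] * Kt

rng = np.random.default_rng(0)
Xv = rng.normal(size=(6, 4))                          # toy visual features
Xt = rng.integers(0, 2, size=(6, 10)).astype(float)   # toy keyword indicators

K = combined_kernel(rbf_kernel(Xv, Xv), linear_kernel(Xt, Xt))
# A convex combination of valid kernels stays symmetric positive semi-definite,
# so it can be handed directly to any kernelized SVM solver.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-9
```

The same construction covers the paper's comparison points: early fusion corresponds to concatenating `Xv` and `Xt` before computing one kernel, late fusion to training one classifier per kernel and merging their scores.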
Informing Regional Water-Energy-Food Nexus with System Analysis and Interactive Visualizations
NASA Astrophysics Data System (ADS)
Yang, Y. C. E.; Wi, S.
2016-12-01
Communicating scientific results to non-technical practitioners is challenging due to their differing interests, concerns, and agendas. It is further complicated by the growing number of relevant factors that need to be considered, such as climate change and demographic dynamics. Visualization is an effective method for the scientific community to disseminate results, and it represents an opportunity for the future of water resources systems analysis (WRSA). This study demonstrates an intuitive way to communicate WRSA results to practitioners using interactive web-based visualization tools developed with the JavaScript library Data-Driven Documents (D3), with a case study in the Great Ruaha River of Tanzania. The decreasing trend of streamflow in the region over recent decades highlights the need to assess the competition for water between agricultural production, energy generation, and ecosystem services. Our team conducted advanced water resources systems analysis to inform policy that will affect the water-energy-food nexus. Modeling results are presented in the web-based visualization tools and allow non-technical practitioners to brush the graphs directly (e.g., Figure 1). The WRSA suggests that no single measure can completely resolve the water competition. A combination of measures, each of which is acceptable from a social and economic perspective, together with acceptance that zero flows in the wetland cannot be totally eliminated during dry years, is likely to be the best way forward.
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine whether persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and the visual cue presentation rate were manipulated. Data were analyzed by condition using factorial RMANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity.
The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU), and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Towards Real Time Diagnostics of Hybrid Welding Laser/GMAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timothy Mcjunkin; Dennis C. Kunerth; Corrie Nichol
2013-07-01
Methods are currently being developed toward a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.
Towards real time diagnostics of Hybrid Welding Laser/GMAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.
2014-02-18
Methods are currently being developed toward a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.
Towards real time diagnostics of Hybrid Welding Laser/GMAW
NASA Astrophysics Data System (ADS)
McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.; Todorov, E.; Levesque, S.
2014-02-01
Methods are currently being developed toward a more robust system for real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of proposed real-time techniques on weld samples are presented, along with the concepts to apply the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.
Hilbert, Sebastian; Sommer, Philipp; Gutberlet, Matthias; Gaspar, Thomas; Foldyna, Borek; Piorkowski, Christopher; Weiss, Steffen; Lloyd, Thomas; Schnackenburg, Bernhard; Krueger, Sascha; Fleiter, Christian; Paetsch, Ingo; Jahnke, Cosima; Hindricks, Gerhard; Grothoff, Matthias
2016-04-01
Recently, cardiac magnetic resonance (CMR) imaging has been found feasible for visualization of the underlying substrate of cardiac arrhythmias, as well as for visualization of cardiac catheters during diagnostic and ablation procedures. Real-time CMR-guided cavotricuspid isthmus ablation was performed in a series of six patients using a combination of active catheter tracking and catheter visualization with real-time MR imaging. Cardiac magnetic resonance utilizing a 1.5 T system was performed in patients under deep propofol sedation. A three-dimensional whole-heart sequence with navigator technique and a fast automated segmentation algorithm was used for online segmentation of all cardiac chambers, which were thereafter displayed on a dedicated image-guidance platform. In three out of six patients complete isthmus block could be achieved in the MR scanner; two of these patients did not need any additional fluoroscopy. In the first patient, technical issues called for completion of the procedure in a conventional laboratory; in another two patients the isthmus was partially blocked by magnetic resonance imaging (MRI)-guided ablation. The mean procedural time for the MR procedure was 109 ± 58 min. Intubation of the CS was performed within a mean time of 2.75 ± 2.21 min. Total fluoroscopy time for completion of the isthmus block ranged from 0 to 7.5 min. The combination of active catheter tracking and passive real-time visualization in CMR-guided electrophysiologic (EP) studies using advanced interventional hardware and software was safe and enabled efficient navigation, mapping, and ablation. These cases demonstrate significant progress in the development of MR-guided EP procedures. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Visual perception of axes of head rotation
Arnoldussen, D. M.; Goossens, J.; van den Berg, A. V.
2013-01-01
Registration of ego-motion is important to accurately navigate through space. Movements of the head and eye relative to space are registered through the vestibular system and optical flow, respectively. Here, we address three questions concerning the visual registration of self-rotation. (1) Eye-in-head movements provide a link between the motion signals received by sensors in the moving eye and sensors in the moving head. How are these signals combined into an ego-rotation percept? We combined optic flow of simulated forward and rotational motion of the eye with different levels of eye-in-head rotation for a stationary head. We dissociated simulated gaze rotation and head rotation by different levels of eye-in-head pursuit. We found that perceived rotation matches simulated head rotation, not gaze rotation. This rejects a model for perceived self-rotation that relies on the rotation of the gaze line. Rather, eye-in-head signals serve to transform the optic flow's rotation information, which specifies rotation of the scene relative to the eye, into a rotation relative to the head. This suggests that transformed visual self-rotation signals may combine with vestibular signals. (2) Do transformed visual self-rotation signals reflect the arrangement of the semi-circular canals (SCC)? Previously, we found sub-regions within MST and V6+ that respond to the speed of the simulated head rotation. Here, we re-analyzed those blood oxygenation level-dependent (BOLD) signals for the presence of a spatial dissociation related to the axes of visually simulated head rotation, such as has been found in sub-cortical regions of various animals. On the contrary, we found a rather uniform BOLD response to simulated rotation along the three SCC axes. (3) We investigated whether subjects' sensitivity to the direction of the head rotation axis shows SCC-axis specificity.
We found that sensitivity to head rotation is rather uniformly distributed, suggesting that in human cortex, visuo-vestibular integration is not arranged into the SCC frame. PMID:23919087
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array-processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
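The programming pattern the abstract describes, a purely computational update loop with rendering handled separately by the Visual module, can be sketched as follows. Only the physics loop is shown, in plain Python, with the VPython rendering calls indicated in comments; the object name `ball` and the modern `vpython` package import are illustrative assumptions, not taken from the abstract.

```python
# Physics-only core of a typical VPython program: a ball falling under
# gravity with an elastic floor bounce. In VPython the state update is the
# same; the Visual module would render the object many times per second:
#   from vpython import sphere, vector, rate      # (illustrative)
#   ball = sphere(pos=vector(0, 10, 0), radius=0.5)
g = -9.8           # gravitational acceleration, m/s^2
dt = 0.01          # time step, s
y, vy = 10.0, 0.0  # initial height and vertical velocity

trajectory = []
for _ in range(300):          # 3 simulated seconds
    vy += g * dt              # semi-implicit Euler: velocity first,
    y += vy * dt              # then position
    if y < 0.5:               # elastic bounce off the floor
        y = 0.5
        vy = -vy
    trajectory.append(y)      # in VPython: ball.pos.y = y; rate(100)
```

The separation shown here is exactly what makes VPython approachable for novices: the loop contains only the numerical model, and the graphics thread picks up the changing positions on its own.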
OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media.
Wu, Yingcai; Liu, Shixia; Yan, Kai; Liu, Mengchen; Wu, Fangzhao
2014-12-01
It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.
A framework for stochastic simulations and visualization of biological electron-transfer dynamics
NASA Astrophysics Data System (ADS)
Nakano, C. Masato; Byun, Hye Suk; Ma, Heng; Wei, Tao; El-Naggar, Mohamed Y.
2015-08-01
Electron transfer (ET) dictates a wide variety of energy-conversion processes in biological systems. Visualizing ET dynamics could provide key insight into understanding and possibly controlling these processes. We present a computational framework named VizBET to visualize biological ET dynamics, using an outer-membrane Mtr-Omc cytochrome complex in Shewanella oneidensis MR-1 as an example. Starting from X-ray crystal structures of the constituent cytochromes, molecular dynamics simulations are combined with homology modeling, protein docking, and binding free energy computations to sample the configuration of the complex as well as the change of the free energy associated with ET. This information, along with quantum-mechanical calculations of the electronic coupling, provides inputs to kinetic Monte Carlo (KMC) simulations of ET dynamics in a network of heme groups within the complex. Visualization of the KMC simulation results has been implemented as a plugin to the Visual Molecular Dynamics (VMD) software. VizBET has been used to reveal the nature of ET dynamics associated with novel nonequilibrium phase transitions in a candidate configuration of the Mtr-Omc complex due to electron-electron interactions.
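The kinetic Monte Carlo (KMC) stage of such a pipeline is, at its core, a Gillespie-style simulation of rate-driven hops between redox sites. Below is a minimal, self-contained sketch of KMC for a single electron hopping along a linear chain of heme-like sites; the rate values are arbitrary illustrative numbers, not the Marcus-theory rates that VizBET derives from electronic couplings and free-energy computations.

```python
import math
import random

def kmc_hop_chain(rates_fwd, rates_back, n_steps=1000, seed=1):
    """Gillespie-style KMC of one electron hopping on a linear chain.

    rates_fwd[i]:  hop rate from site i to site i+1
    rates_back[i]: hop rate from site i+1 back to site i
    Returns (final_site, elapsed_time).
    """
    rng = random.Random(seed)
    site, t = 0, 0.0
    last = len(rates_fwd)  # index of the terminal site
    for _ in range(n_steps):
        # Enumerate the events available from the current site
        events = []
        if site < last:
            events.append((rates_fwd[site], +1))
        if site > 0:
            events.append((rates_back[site - 1], -1))
        total = sum(r for r, _ in events)
        # Exponentially distributed waiting time for the next event
        t += -math.log(1.0 - rng.random()) / total
        # Choose which event fires, weighted by its rate
        x = rng.random() * total
        for r, step in events:
            if x < r:
                site += step
                break
            x -= r
    return site, t

site, t = kmc_hop_chain([10.0] * 4, [1.0] * 4)
assert 0 <= site <= 4 and t > 0.0
```

With forward rates ten times the backward rates, the electron drifts toward the terminal heme, mimicking directional ET through the chain; VizBET's visualization layer would animate exactly this kind of trajectory on the docked cytochrome structure.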
Visualization techniques for computer network defense
NASA Astrophysics Data System (ADS)
Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew
2011-06-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
NASA Astrophysics Data System (ADS)
Knight, Claire; Munro, Malcolm
2001-07-01
Distributed component-based systems seem to be the immediate future for software development. The use of such techniques and object-oriented languages, in combination with ever more powerful higher-level frameworks, has led to the rapid creation and deployment of such systems to cater for the demands of internet- and service-driven business systems. This diversity of solutions, in terms of both the components utilised and the physical/virtual locations of those components, can provide powerful answers to the new demand. The problem lies in the comprehension and maintenance of such systems, because they have inherent uncertainty: the components combined at any given time for a solution may differ, the messages generated, sent, and/or received may differ, and the physical/virtual locations cannot be guaranteed. Trying to account for this uncertainty and to build it into analysis and comprehension tools is important for both development and maintenance activities.
Zhen Qin; Bin Zhang; Ning Hu; Ping Wang
2015-01-01
The mammalian gustatory system is acknowledged as one of the most valid chemosensing systems. The sense of taste in particular provides critical information about the ingestion of toxic and noxious chemicals. Thus, the potential of utilizing the rat gustatory system for detecting sapid substances is investigated. By recording the electrical activities of neurons in the gustatory cortex, a novel bioelectronic tongue system is developed in combination with brain-machine interface technology. Features are extracted from both spikes and local field potentials. By visualizing these features, classification is performed and the responses to different tastants can be prominently separated from each other. The results suggest that this in vivo bioelectronic tongue is capable of detecting tastants and will provide a promising platform for potential applications in evaluating the palatability of food and beverages.
Multisensory Self-Motion Compensation During Object Trajectory Judgments
Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.
2015-01-01
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317
NaviCell Web Service for network-based data visualization.
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei
2015-07-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Mui, Amanda M.; Yang, Victoria; Aung, Moe H.; Fu, Jieming; Adekunle, Adewumi N.; Prall, Brian C.; Sidhu, Curran S.; Park, Han na; Boatright, Jeffrey H.; Iuvone, P. Michael
2018-01-01
Visual experience during the critical period modulates visual development such that deprivation causes visual impairments while stimulation induces enhancements. This study aimed to determine whether visual stimulation in the form of daily optomotor response (OMR) testing during the mouse critical period (1) improves aspects of visual function, (2) involves retinal mechanisms and (3) is mediated by brain derived neurotrophic factor (BDNF) and dopamine (DA) signaling pathways. We tested spatial frequency thresholds in C57BL/6J mice daily from postnatal days 16 to 23 (P16 to P23) using OMR testing. Daily OMR-treated mice were compared to littermate controls that were placed in the OMR chamber without moving gratings. Contrast sensitivity thresholds, electroretinograms (ERGs), visual evoked potentials, and pattern ERGs were acquired at P21. To determine the role of BDNF signaling, a TrkB receptor antagonist (ANA-12) was systemically injected 2 hours prior to OMR testing in another cohort of mice. BDNF immunohistochemistry was performed on retina and brain sections. Retinal DA levels were measured using high-performance liquid chromatography. Daily OMR testing enhanced spatial frequency thresholds and contrast sensitivity compared to controls. OMR-treated mice also had improved rod-driven ERG oscillatory potential response times, greater BDNF immunoreactivity in the retinal ganglion cell layer, and increased retinal DA content compared to controls. VEPs and pattern ERGs were unchanged. Systemic delivery of ANA-12 attenuated OMR-induced visual enhancements. Daily OMR testing during the critical period leads to general visual function improvements accompanied by increased DA and BDNF in the retina, with this process being requisitely mediated by TrkB activation. These results suggest that novel combination therapies involving visual stimulation and using both behavioral and molecular approaches may benefit degenerative retinal diseases or amblyopia. PMID:29408880
Reliable fusion of control and sensing in intelligent machines. Thesis
NASA Technical Reports Server (NTRS)
Mcinroy, John E.
1991-01-01
Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first-order second-moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete time-invariant control system are derived. A case study analyzing visual positioning in a robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.
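The central quantity here, the probability that Gaussian control/sensing errors satisfy a quadratic specification, can be illustrated with a small Monte Carlo check against a closed form. This is a toy sketch of the kind of probability the thesis bounds analytically, not its first-order second-moment machinery; the spec threshold `c` and noise level `sigma` are made-up numbers.

```python
import math
import random

def prob_quadratic_spec(sigma, c, n_samples=20000, seed=0):
    """Estimate P(e1^2 + e2^2 <= c) for independent zero-mean Gaussian
    errors with standard deviation sigma -- a toy version of checking a
    quadratic positioning spec against Gaussian measurement noise."""
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(0, sigma) ** 2 + rng.gauss(0, sigma) ** 2 <= c
        for _ in range(n_samples)
    )
    return hits / n_samples

# For sigma = 1, e1^2 + e2^2 is chi-square with 2 dof (exponential with
# mean 2), so P(<= c) = 1 - exp(-c/2); compare the estimate to this.
p_hat = prob_quadratic_spec(1.0, 2.0)
p_true = 1 - math.exp(-1.0)
assert abs(p_hat - p_true) < 0.02
```

A design would be accepted when this probability exceeds the required reliability; the thesis replaces the sampling step with analytic lower bounds so the check also holds over an interval of time.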
Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M Barry; Lee, Brian; Stinn, John; Worthen, Katherine
2010-01-01
Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities and, using a centralized architecture, to add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a JavaScript plotting library. Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance.
NASA Astrophysics Data System (ADS)
Sudiartha, IKG; Catur Bawa, IGNB
2018-01-01
Information is inseparable from the social life of a community, especially in education. Examples include academic calendars, activity agendas, announcements, and campus news. In line with technological developments, purely text-based information is becoming obsolete, so creativity is needed to present information more quickly, accurately, and attractively by exploiting digital technology and the internet. This paper develops an application that delivers information as a visual display, running on a computer network with multimedia support. Network-based applications make it easy to update data through internet services and to deliver attractive multimedia presentations. The application “Networking Visual Display Information Unit” serves as a medium that provides information services to students and academic staff that are more engaging and easier to update than a bulletin board. Information is presented as running text, latest news, agenda, academic calendar, and video, giving an attractive presentation in line with technological developments at the Politeknik Negeri Bali. Through this research we aim to create the “Networking Visual Display Information Unit” software with optimal bandwidth usage by combining local data sources with data retrieved over the network. The research delivers both the visual display design with optimal bandwidth usage and the supporting software.
NASA Astrophysics Data System (ADS)
Omine, Yukio; Sakai, Masaki; Aoki, Yoshimitsu; Takagi, Mikio
2004-10-01
In recent years, crisis management in response to terrorist attacks and natural disasters, as well as accelerating rescue operations, has become an important issue. Rescue operations greatly influence human lives and require the ability to accurately and swiftly communicate information as well as assess the status of the site. Currently, a considerable amount of research is being conducted on assisting rescue operations with various engineering techniques, such as information technology and radar technology. In the present research, we consider assessing the status of the site to be most crucial in rescue and firefighting operations at a fire disaster site, and we aim to visualize space that is smothered in dense smoke. In a space filled with dense smoke, where visual or infrared sensing techniques are not feasible, three-dimensional measurements can be realized using a compact millimeter-wave radar device combined with directional information from a gyro sensor. Using these techniques, we construct a system that can build and visualize a three-dimensional geometric model of the space. The final objective is to implement such a system on a wearable computer, which will improve firefighters' spatial perception, assisting them in baseline assessment and decision making. In the present paper, we report the results of basic experiments on three-dimensional measurement and visualization of a smoke-free space using millimeter-wave radar.
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions of high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task with the combined display was slightly, but significantly, better than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement grows with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
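The log-frequency encoding described above can be sketched as a small mapping function. The exact normalization (an error of ±1 spanning the full six-octave range around the 1 kHz center) is an assumption for illustration, not taken from the paper.

```python
def error_to_frequency(error, center_hz=1000.0, octave_range=6.0):
    """Map a normalized tracking error in [-1, 1] to a tone frequency.

    The display encodes vertical error as log frequency over a
    six-octave range centered at 1 kHz; here an error of +/-1 is assumed
    to span +/-3 octaves around the center.
    """
    error = max(-1.0, min(1.0, error))  # clamp to the displayable range
    return center_hz * 2.0 ** (error * octave_range / 2.0)
```

Zero error thus sounds the 1 kHz center tone (marked in the study by an amplitude notch), while full-scale errors reach 125 Hz and 8 kHz.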
Cameirão, Mónica S; Badia, Sergi Bermúdez i; Duarte, Esther; Frisoli, Antonio; Verschure, Paul F M J
2012-10-01
Although there is strong evidence on the beneficial effects of virtual reality (VR)-based rehabilitation, it is not yet well understood how the different aspects of these systems affect recovery. Consequently, we do not exactly know what features of VR neurorehabilitation systems are decisive in conveying their beneficial effects. To specifically address this issue, we developed 3 different configurations of the same VR-based rehabilitation system, the Rehabilitation Gaming System, using 3 different interface technologies: vision-based tracking, haptics, and a passive exoskeleton. Forty-four patients with chronic stroke were randomly allocated to one of the configurations and used the system for 35 minutes a day for 5 days a week during 4 weeks. Our results revealed significant within-subject improvements at most of the standard clinical evaluation scales for all groups. Specifically we observe that the beneficial effects of VR-based training are modulated by the use/nonuse of compensatory movement strategies and the specific sensorimotor contingencies presented to the user, that is, visual feedback versus combined visual haptic feedback. Our findings suggest that the beneficial effects of VR-based neurorehabilitation systems such as the Rehabilitation Gaming System for the treatment of chronic stroke depend on the specific interface systems used. These results have strong implications for the design of future VR rehabilitation strategies that aim at maximizing functional outcomes and their retention. Clinical Trial Registration- This trial was not registered because it is a small clinical study that evaluates the feasibility of prototype devices.
Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2016-07-01
Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consisted of a respiratory indicator marking the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of displacement and of breathing-cycle period. All coaching techniques improved respiratory variation compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, bar model and wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, bar model and wave model, respectively. The average reductions in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. For variation in both displacement and period, the wave model was superior to the other techniques. Our results showed that visual biofeedback combined with a wave model could potentially provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
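The RMSE metric used above to quantify respiratory variation is straightforward to compute. This sketch assumes the breathing trace is compared point-by-point against a reference (e.g., the patient-specific guiding waveform), which is an illustrative choice rather than the study's exact protocol.

```python
import numpy as np

def rmse(observed, reference):
    """Root mean squared error between a breathing trace and a reference.

    Used here for displacement (mm); the same formula applies to
    breathing-cycle periods (s).
    """
    observed = np.asarray(observed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((observed - reference) ** 2)))
```

A lower RMSE against the guiding waveform indicates more regular breathing, which is what the wave-model biofeedback aims to achieve.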
Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L
2004-01-01
Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusions would expand surgeons' capabilities in interpretation of data and in planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted into the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data. 
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
Mellerup, Soren K; Rao, Ying-Li; Amarne, Hazem; Wang, Suning
2016-09-02
Combining a three-coordinated boron (BMes2) moiety with a four-coordinated photochromic organoboron unit leads to a series of new diboron compounds that undergo four-state reversible color switching in response to stimuli of light, heat, and fluoride ions. Thus, these hybrid diboron systems allow both convenient color tuning/switching of such photochromic systems, as well as visual fluoride sensing by color or fluorescent emission color change.
Automatic Detection and Classification of Audio Events for Road Surveillance Applications.
Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine
2018-06-06
This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.
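As a rough illustration of combining time- and frequency-domain audio features, the sketch below concatenates RMS energy, zero-crossing rate, and spectral centroid into one feature vector. The paper's actual feature set, including the joint time-frequency features from quadratic TFDs, is considerably richer; function and parameter names here are illustrative.

```python
import numpy as np

def audio_feature_vector(signal, fs):
    """Concatenate simple time- and frequency-domain audio features.

    A minimal stand-in for the paper's combined feature set.
    """
    x = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))                    # time domain: energy
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0  # time domain: zero-crossing rate
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    # Frequency domain: magnitude-weighted mean frequency.
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([rms, zcr, centroid])
```

Such a vector would then feed a classifier (e.g., an SVM) trained to separate crash and skid sounds from normal traffic noise.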
Three-dimensional device characterization by high-speed cinematography
NASA Astrophysics Data System (ADS)
Maier, Claus; Hofer, Eberhard P.
2001-10-01
Testing of micro-electro-mechanical systems (MEMS) for optimization purposes or reliability checks can be supported by device visualization whenever optical access is available. The difficulty in such an investigation is the short time duration of dynamical phenomena in micro devices. This paper presents a test setup to visualize movements within MEMS in real time and in two perpendicular directions. A three-dimensional view is achieved by combining a commercial high-speed camera system, which can capture up to 8 images of the same process with a minimum interframe time of 10 ns for the first direction, with a second visualization system consisting of a highly sensitive CCD camera working with multiple-exposure LED illumination in the perpendicular direction. Well synchronized, the two systems provide 3-D information, which is treated by digital image processing to correct image distortions and to perform the detection of object contours. Symmetric and asymmetric binary collisions of micro drops are chosen as test experiments, featuring coalescence and surface rupture. Another application shown here is the investigation of sprays produced by an atomizer. The second direction of view is a prerequisite for this measurement to select an intended plane of focus.
Web-based GIS for spatial pattern detection: application to malaria incidence in Vietnam.
Bui, Thanh Quang; Pham, Hai Minh
2016-01-01
There is great concern about how to build an interoperable health information system spanning public health and health information technology within the development of public information and health surveillance programmes. Technically, some major issues remain regarding health data visualization, spatial processing of health data, health information dissemination, data sharing, and the access of local communities to health information. In combination with GIS, we propose a technical framework for web-based health data visualization and spatial analysis. Data were collected from open map servers and geocoded with the Open Data Kit package and data geocoding tools. The web-based system is designed on open-source frameworks and libraries. It provides a web-based analysis tool for pattern detection through three spatial tests: nearest neighbour, the K function, and spatial autocorrelation. The result is a web-based GIS through which end users can detect disease patterns by selecting an area and spatial test parameters, and contribute findings to managers and decision makers. End users can be health practitioners, educators, local communities, health sector authorities, and decision makers. This web-based system allows for the improvement of health-related services to public sector users as well as citizens in a secure manner. The combination of spatial statistics and web-based GIS can be a solution that helps empower health practitioners in direct and specific intersectoral actions, thus providing for better analysis, control, and decision making.
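One of the three spatial tests mentioned, nearest neighbour analysis, can be sketched as the Clark-Evans index: the ratio of the observed mean nearest-neighbour distance to the distance expected under complete spatial randomness. This is a generic textbook formulation; the edge corrections used in production GIS tools are omitted.

```python
import math

def clark_evans_index(points, area):
    """Nearest-neighbour index R for a 2D point pattern.

    R ~ 1 suggests a random pattern, R < 1 clustering (e.g., a disease
    hotspot), R > 1 dispersion. Plain O(n^2) search, no edge correction.
    """
    n = len(points)
    nearest = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nearest.append(d)
    observed = sum(nearest) / n
    expected = 0.5 * math.sqrt(area / n)  # mean NN distance under CSR
    return observed / expected
```

Applied to geocoded malaria cases within a selected district, an index well below 1 would flag spatial clustering worth reporting to decision makers.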
Reliability and validity of the Microsoft Kinect for evaluating static foot posture
2013-01-01
Background The evaluation of foot posture in a clinical setting is useful to screen for potential injury, however disagreement remains as to which method has the greatest clinical utility. An inexpensive and widely available imaging system, the Microsoft Kinect™, may possess the characteristics to objectively evaluate static foot posture in a clinical setting with high accuracy. The aim of this study was to assess the intra-rater reliability and validity of this system for assessing static foot posture. Methods Three measures were used to assess static foot posture; traditional visual observation using the Foot Posture Index (FPI), a 3D motion analysis (3DMA) system and software designed to collect and analyse image and depth data from the Kinect. Spearman’s rho was used to assess intra-rater reliability and concurrent validity of the Kinect to evaluate foot posture, and a linear regression was used to examine the ability of the Kinect to predict total visual FPI score. Results The Kinect demonstrated moderate to good intra-rater reliability for four FPI items of foot posture (ρ = 0.62 to 0.78) and moderate to good correlations with the 3DMA system for four items of foot posture (ρ = 0.51 to 0.85). In contrast, intra-rater reliability of visual FPI items was poor to moderate (ρ = 0.17 to 0.63), and correlations with the Kinect and 3DMA systems were poor (absolute ρ = 0.01 to 0.44). Kinect FPI items with moderate to good reliability predicted 61% of the variance in total visual FPI score. Conclusions The majority of the foot posture items derived using the Kinect were more reliable than the traditional visual assessment of FPI, and were valid when compared to a 3DMA system. Individual foot posture items recorded using the Kinect were also shown to predict a moderate degree of variance in the total visual FPI score. Combined, these results support the future potential of the Kinect to accurately evaluate static foot posture in a clinical setting. PMID:23566934
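Spearman's rho, the statistic used in the study above for both intra-rater reliability and concurrent validity, can be computed from average ranks as below. This is a generic implementation (ties receive average ranks; no p-value), not the authors' analysis code.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation between two paired score lists."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            # Extend over a block of tied values.
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

For the reliability analysis, x and y would be the same rater's FPI item scores from two sessions; for validity, the Kinect-derived and 3DMA-derived scores.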
Keshner, E A; Dhaher, Y
2008-07-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.
3D-Web-GIS RFID location sensing system for construction objects.
Ko, Chien-Ho
2013-01-01
Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
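The hybrid search strategy above, simulated annealing to stabilize the search followed by gradient descent to reduce error, can be sketched for range-based 3D positioning as follows. The cooling schedule, step sizes, and iteration counts are illustrative assumptions, not the authors' settings.

```python
import math
import random

def locate_tag(anchors, ranges, sa_iters=200, gd_iters=2000, seed=1):
    """Estimate a 3D tag position from anchor positions and measured ranges.

    An SA-style random search stabilizes the estimate, then plain
    gradient descent on the squared range residuals refines it.
    """
    rng = random.Random(seed)

    def cost(p):
        return sum((math.dist(p, a) - r) ** 2 for a, r in zip(anchors, ranges))

    # Start from the anchor centroid.
    cur = [sum(c) / len(anchors) for c in zip(*anchors)]
    best, temp = cur[:], 1.0
    for _ in range(sa_iters):
        cand = [c + rng.gauss(0.0, temp) for c in cur]
        delta = cost(cand) - cost(cur)
        # Always accept improvements; accept worse moves with shrinking probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        temp *= 0.98
    # Gradient descent on sum_i (||p - a_i|| - r_i)^2 to reduce residual error.
    p, lr = best[:], 0.05
    for _ in range(gd_iters):
        grad = [0.0, 0.0, 0.0]
        for a, r in zip(anchors, ranges):
            d = math.dist(p, a) or 1e-9
            coef = 2.0 * (d - r) / d
            for k in range(3):
                grad[k] += coef * (p[k] - a[k])
        p = [p[k] - lr * grad[k] for k in range(3)]
    return p
```

In the paper's system, the estimated coordinates would then be pushed to the 3D-Web-GIS layer for visualization on the site model.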
East China Sea Storm Surge Modeling and Visualization System: the Typhoon Soulik case.
Deng, Zengan; Zhang, Feng; Kang, Linchong; Jiang, Xiaoyi; Jin, Jiye; Wang, Wei
2014-01-01
The East China Sea (ECS) Storm Surge Modeling System (ESSMS) is developed based on the Regional Ocean Modeling System (ROMS). A case simulation is performed for Typhoon Soulik, which made landfall on the coast of Fujian Province, China, at 6 pm on July 13, 2013. Modeling results show that the maximum tide level occurred at 6 pm, which was also the landfall time of Soulik. This coincidence may lead to significant storm surge and water level rise in the coastal region. The water level variation induced by the high winds of Soulik ranges from -0.1 to 0.15 m. Water level generally increases near the landfall site, in particular on the left-hand side of the typhoon track. It is calculated that a 0.15 m water level rise in this region can increase the submerged area by ~0.2 km(2), which could be catastrophic to the coastal environment and its inhabitants. Additionally, a Globe Visualization System (GVS) is built on World Wind to better provide users with typhoon/storm surge information. The main functions of GVS include data indexing, browsing, analysis, and visualization. In combination with ESSMS, GVS can facilitate precaution against, and mitigation of, typhoon/storm surges in the ECS.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to instrument panel or mounted on console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 seconds per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and Lancaster and Casali the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. Data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Structural and functional changes across the visual cortex of a patient with visual form agnosia.
Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J
2013-07-31
Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.
Influence of high ambient illuminance and display luminance on readability and subjective preference
NASA Astrophysics Data System (ADS)
De Moor, Katrien; Andrén, Börje; Guo, Yi; Brunnström, Kjell; Wang, Kun; Drott, Anton; Hermann, David S.
2015-03-01
Many devices, such as tablets, smartphones, notebooks, fixed and portable navigation systems are used on a (nearly) daily basis, both indoors and outdoors. It is often argued that contextual factors, such as the ambient illuminance in relation to characteristics of the display (e.g., surface treatment, screen reflectance, display luminance …) may have a strong influence on the use of such devices and corresponding user experiences. However, the current understanding of these influence factors is still rather limited. In this work, we therefore focus in particular on the impact of lighting and display luminance on readability, visual performance, subjective experience and preference. A controlled lab study (N=18) with a within-subjects design was performed to evaluate two car displays (one glossy and one matte display) in conditions that simulate bright outdoor lighting. Four ambient luminance levels and three display luminance settings were combined into 7 experimental conditions. More concretely, we investigated for each display: (1) whether and how readability and visual performance varied with the different combinations of ambient luminance and display luminance and (2) whether and how they influenced the subjective experience (through self-reported valence, annoyance, visual fatigue) and preference. The results indicate a limited, yet negative influence of increased ambient luminance and reduced contrast on visual performance and readability for both displays. Similarly, we found that the self-reported valence decreases and annoyance and visual fatigue increase as the contrast ratio decreases and ambient luminance increases. Overall, the impact is clearer for the matte display than for the glossy display.
Boyes, William K; Degn, Laura L; Martin, Sheppard A; Lyke, Danielle F; Hamm, Charles W; Herr, David W
2014-01-01
Ethanol-blended gasoline entered the market in response to demand for domestic renewable energy sources, and may result in increased inhalation of ethanol vapors in combination with other volatile gasoline constituents. It is important to understand the potential risks of inhalation of ethanol vapors by themselves, and also as a baseline for evaluating the risks of ethanol combined with a complex mixture of hydrocarbon vapors. Because sensory dysfunction has been reported after developmental exposure to ethanol, we evaluated the effects of developmental exposure to ethanol vapors on neurophysiological measures of sensory function as a component of a larger project evaluating developmental ethanol toxicity. Pregnant Long-Evans rats were exposed to target concentrations of 0, 5000, 10,000, or 21,000 ppm ethanol vapors for 6.5 h/day over GD9-GD20. Sensory evaluations of male offspring began between PND106 and PND128. Peripheral nerve function (compound action potentials, nerve conduction velocity (NCV)), somatosensory (cortical and cerebellar evoked potentials), auditory (brainstem auditory evoked responses), and visual evoked responses were assessed. Visual function assessment included pattern-elicited visual evoked potentials (VEPs), VEP contrast sensitivity, and electroretinograms recorded from dark-adapted (scotopic) and light-adapted (photopic) flashes, and from UV and green flicker. No consistent concentration-related changes were observed for any of the physiological measures. The results show that gestational exposure to ethanol vapor did not result in detectable changes in peripheral nerve, somatosensory, auditory, or visual function when the offspring were assessed as adults. Published by Elsevier Inc.
CRAVE: a database, middleware and visualization system for phenotype ontologies.
Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M
2005-04-01
A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However, tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible JAVA application that accesses an underlying MySQL database of ontologies via a JAVA persistent middleware layer (Chameleon). This maps the database tables into discrete JAVA classes and creates memory-resident, interlinked objects corresponding to the ontology data. These JAVA objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
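As a concrete illustration of the kind of filter described above, here is a minimal frequency-domain sketch that combines a classic Wiener restoration with an optional Gaussian smoothing term. The scalar signal-to-noise ratio, the known blur OTF, and the `gauss_sigma` parameter are simplifying assumptions; the paper's actual filter design, which accounts for the image-gathering transformations, is considerably more elaborate.

```python
import numpy as np

def wiener_restore(image, otf, snr, gauss_sigma=0.0):
    """Frequency-domain Wiener restoration with optional Gaussian smoothing.

    image:       observed (blurred, noisy) image, 2D array
    otf:         optical transfer function of the blur, same shape as image
    snr:         assumed signal-to-noise power ratio (scalar, hypothetical)
    gauss_sigma: width (in cycles/sample) of a Gaussian smoothing term used
                 to suppress colored noise and ringing near sharp edges
    """
    G = np.fft.fft2(image)
    # Classic Wiener filter: H* / (|H|^2 + 1/SNR)
    W = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
    if gauss_sigma > 0:
        fy = np.fft.fftfreq(image.shape[0])
        fx = np.fft.fftfreq(image.shape[1])
        FY, FX = np.meshgrid(fy, fx, indexing="ij")
        # Gaussian apodization attenuates high frequencies, trading a
        # little resolution for reduced noise and ringing
        W *= np.exp(-(FX**2 + FY**2) / (2 * gauss_sigma**2))
    return np.real(np.fft.ifft2(W * G))
```

With a unit OTF and a very large SNR the filter reduces to the identity, which makes the behavior easy to sanity-check before applying a real blur model.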
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Usability of stereoscopic view in teleoperation
NASA Astrophysics Data System (ADS)
Boonsuk, Wutthigrai
2015-03-01
Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes. Our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision as compared to the 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
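The depth-from-disparity principle described above reduces to a one-line triangulation, Z = fB/d: depth is the focal length times the camera (or inter-ocular) baseline, divided by the horizontal offset between the two images. This is a hypothetical sketch with made-up parameter values, not the setup used in the study.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth from binocular disparity.

    Two viewpoints a baseline apart see the same point shifted
    horizontally by `disparity_px` pixels; depth is inversely
    proportional to that shift: Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 700 px focal length, 65 mm baseline
# (roughly human inter-pupillary distance), 35 px disparity.
depth = depth_from_disparity(700, 0.065, 35)  # depth in meters
```

The inverse relationship is why stereoscopic depth cues are strongest for nearby objects: disparity falls off rapidly with distance, which is one reason the benefit of stereo views in teleoperation is an open question.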
REACH: Real-Time Data Awareness in Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Maks, Lori; Coleman, Jason; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
Missions have been proposed that will use multiple spacecraft to perform scientific or commercial tasks. Indeed, in the commercial world, some spacecraft constellations already exist. Aside from the technical challenges of constructing and flying these missions, there is also the financial challenge presented by the traditional model of the flight operations team (FOT) when it is applied to a constellation mission. Proposed constellation missions range in size from three spacecraft to more than 50. If the current ratio of three-to-five FOT personnel per spacecraft is maintained, the size of the FOT becomes cost prohibitive. The Advanced Architectures and Automation Branch at the Goddard Space Flight Center (GSFC Code 588) saw the potential to reduce the cost of these missions by creating new user interfaces to the ground system health-and-safety data. The goal is to enable a smaller FOT to remain aware of and responsive to the increased amount of ground system information in a multi-spacecraft environment. Rather than abandon the tried and true, these interfaces were developed to run alongside existing ground system software to provide additional support to the FOT. These new user interfaces have been combined in a tool called REACH. REACH, the Real-time Evaluation and Analysis of Consolidated Health, is a software product that uses advanced visualization techniques to make spacecraft anomalies easy to spot, no matter how many spacecraft are in the constellation. REACH reads a real-time stream of data from the ground system and displays it to the FOT such that anomalies are easy to pick out and investigate. Data visualization has been used in ground system operations for many years. To provide a unique visualization tool, we developed a unique source of data to visualize: the REACH Health Model Engine.
The Health Model Engine is rule-based software that receives real-time telemetry information and outputs "health" information related to the subsystems and spacecraft that the telemetry belongs to. The Health Engine can run out-of-the-box or can be tailored with a scripting language. Out of the box, it uses limit violations to determine the health of subsystems and spacecraft; when tailored, it determines health using equations combining the values and limits of any telemetry in the spacecraft. The REACH visualizations then "roll up" the information from the Health Engine into high-level, summary displays. These summary visualizations can be "zoomed" into for increasing levels of detail. Currently REACH is installed in the Small Explorer (SMEX) lab at GSFC, and is monitoring three of their five spacecraft. We are scheduled to install REACH in the Mid-sized Explorer (MIDEX) lab, which will allow us to monitor up to six more spacecraft. The process of installing and using our "research" software in an operational environment has provided many insights into which parts of REACH are a step forward and which of our ideas are missteps. Our paper explores the new concepts in spacecraft health-and-safety visualization, the difficulties of such systems in the operational environment, and the cost and safety issues of multi-spacecraft missions.
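A bare-bones sketch of the out-of-the-box behavior described above: limit violations rolled up from telemetry points to subsystems to spacecraft. All function names and the constellation layout here are invented for illustration and do not reflect REACH's actual data structures or scripting language.

```python
def telemetry_health(value, low, high):
    """Out-of-the-box rule: a telemetry point is healthy iff within limits."""
    return low <= value <= high

def roll_up(healths):
    """A roll-up (subsystem or spacecraft) is healthy only if every child is."""
    return all(healths)

def constellation_summary(constellation):
    """Summarize health for a hypothetical constellation shaped as
    {spacecraft: {subsystem: [(value, low, high), ...]}}."""
    summary = {}
    for craft, subsystems in constellation.items():
        sub_health = {
            name: roll_up(telemetry_health(v, lo, hi) for v, lo, hi in points)
            for name, points in subsystems.items()
        }
        summary[craft] = {"subsystems": sub_health,
                          "healthy": roll_up(sub_health.values())}
    return summary
```

The summary dictionary mirrors the "zoom" idea: a display can show only the top-level `healthy` flags, then expand into per-subsystem detail when an anomaly appears.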
Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng
2017-10-01
In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during the excavation, this paper develops a combined measurement system which is mainly composed of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The different elements of the combined system are mounted on the TBM in suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and matching technique for them are expounded, the position measurement method based on monocular vision is presented, and the calibration methods for the unknown relationships among different parts of the system are proposed. Finally, a set of experimental platforms to simulate the double shield TBM is established, and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
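The coordinate transformations described above can be sketched as a chain of homogeneous 4x4 transforms mapping a point on the cutting shield out to the total station frame. The frame ordering and the calibration values used in the example are hypothetical, not the paper's calibrated parameters.

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and
    translation t (length-3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_total_station(point_local, transforms):
    """Map a point expressed in a local sensor frame through a chain of
    calibrated frames (e.g. vision sensor -> laser target -> total
    station), applying the transforms in order."""
    p = np.append(np.asarray(point_local, dtype=float), 1.0)
    for T in transforms:
        p = T @ p
    return p[:3]
```

In practice each transform in the chain would come from the calibration methods the paper proposes; stacking them as matrices makes the overall mapping a single associative product.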
An interactive visualization tool for mobile objects
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuo
Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation systems, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data.
Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
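Two of the three similarity measures named in the abstract above (locational and directional) might be sketched as follows for time-aligned paths. The exact formulations in the dissertation, and the geometric similarity function, may well differ; these are illustrative definitions only.

```python
import math

def locational_similarity(path_a, path_b):
    """Mean point-to-point distance between two time-aligned paths
    (lower = more similar). Paths are equal-length [(x, y), ...] lists."""
    return sum(math.dist(p, q) for p, q in zip(path_a, path_b)) / len(path_a)

def directional_similarity(path_a, path_b):
    """Mean cosine of the angle between corresponding segment headings
    (1.0 = identical directions, -1.0 = opposite)."""
    def headings(path):
        return [(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(path, path[1:])]
    segs = list(zip(headings(path_a), headings(path_b)))
    cos_sum = 0.0
    for (ax, ay), (bx, by) in segs:
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        # Zero-length segments (no movement) contribute no direction.
        cos_sum += (ax * bx + ay * by) / (na * nb) if na and nb else 0.0
    return cos_sum / len(segs)
```

Paths judged similar under measures like these could then be merged into the synthetic summary paths the toolkit displays.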
Multimedia-assisted breathwalk-aware system.
Yu, Meng-Chieh; Wu, Huan; Lee, Ming-Sui; Hung, Yi-Ping
2012-12-01
Breathwalk is the science of combining specific patterns of footsteps synchronized with breathing. In this study, we developed a multimedia-assisted Breathwalk-aware system which detects the user's walking and breathing conditions and provides appropriate multimedia guidance on the smartphone. Through the mobile device, the system enhances the user's awareness of walking and breathing behaviors. As an example application in slow technology, the system could help meditation beginners learn "walking meditation," a type of meditation which aims to take paces as slowly as possible, to synchronize footsteps with breathing, and to land every footstep with toes first. In the pilot study, we developed a walking-aware system and evaluated whether a multimedia-assisted mechanism is capable of enhancing a beginner's walking awareness during walking meditation. Experimental results show that it could effectively assist beginners in slowing down the walking speed and decreasing incorrect footsteps. In the second experiment, we evaluated the Breathwalk-aware system to find a better feedback mechanism for learning the techniques of Breathwalk during walking meditation. The experimental results show that the visual-auditory mechanism is a better multimedia-assisted mechanism during walking meditation than either the visual or the auditory mechanism alone.
Developing mobile- and BIM-based integrated visual facility maintenance management system.
Lin, Yu-Cheng; Su, Yu-Chih
2013-01-01
Facility maintenance management (FMM) has become an important topic for research on the operation phase of the construction life cycle. Managing FMM effectively is extremely difficult owing to various factors and environments. One of the difficulties is the performance of 2D graphics when depicting maintenance service. Building information modeling (BIM) uses precise geometry and relevant data to support the maintenance service of facilities depicted in 3D object-oriented CAD. This paper proposes a new and practical methodology with application to FMM using BIM technology. Using BIM technology, this study proposes a BIM-based facility maintenance management (BIMFMM) system for maintenance staff in the operation and maintenance phase. The BIMFMM system is then applied in selected case study of a commercial building project in Taiwan to verify the proposed methodology and demonstrate its effectiveness in FMM practice. Using the BIMFMM system, maintenance staff can access and review 3D BIM models for updating related maintenance records in a digital format. Moreover, this study presents a generic system architecture and its implementation. The combined results demonstrate that a BIMFMM-like system can be an effective visual FMM tool.
Chung, Younjin; Salvador-Carulla, Luis; Salinas-Pérez, José A; Uriarte-Uriarte, Jose J; Iruin-Sanz, Alvaro; García-Alonso, Carlos R
2018-04-25
Decision-making in mental health systems should be supported by the evidence-informed knowledge transfer of data. Since mental health systems are inherently complex, involving interactions between their structures, processes and outcomes, decision support systems (DSS) need to be developed using advanced computational methods and visual tools to allow full system analysis, whilst incorporating domain experts in the analysis process. In this study, we use a DSS model developed for interactive data mining and domain expert collaboration in the analysis of complex mental health systems to improve system knowledge and evidence-informed policy planning. We combine an interactive visual data mining approach, the self-organising map network (SOMNet), with an operational expert knowledge approach, expert-based collaborative analysis (EbCA), to develop a DSS model. The SOMNet was applied to the analysis of healthcare patterns and indicators of three different regional mental health systems in Spain, comprising 106 small catchment areas and providing healthcare for over 9 million inhabitants. Based on the EbCA, the domain experts in the development team guided and evaluated the analytical processes and results. Another group of 13 domain experts in mental health systems planning and research evaluated the model based on the analytical information of the SOMNet approach for processing information and discovering knowledge in a real-world context. Through the evaluation, the domain experts assessed the feasibility and technology readiness level (TRL) of the DSS model. The SOMNet, combined with the EbCA, effectively processed evidence-based information when analysing system outliers, explaining global and local patterns, and refining key performance indicators with their analytical interpretations. The evaluation results showed that the DSS model was judged feasible by the domain experts and reached level 7 of the TRL (system prototype demonstration in operational environment).
This study supports the benefits of combining health systems engineering (SOMNet) and expert knowledge (EbCA) to analyse the complexity of health systems research. The use of the SOMNet approach contributes to the demonstration of DSS for mental health planning in practice.
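To make the self-organising map at the heart of the SOMNet approach concrete, here is a minimal SOM trainer: each grid node holds a weight vector, and for every sample the best-matching unit and its grid neighbours move toward the sample. Grid size, decay schedules, and initialization are arbitrary choices for illustration, not the SOMNet configuration.

```python
import numpy as np

def train_som(data, grid_w, grid_h, epochs=100, lr0=0.5, sigma0=None, seed=0):
    """Train a minimal self-organising map on `data` (n_samples x n_dims).

    Returns the trained node weights, shape (grid_w * grid_h, n_dims).
    """
    rng = np.random.default_rng(seed)
    sigma0 = sigma0 or max(grid_w, grid_h) / 2.0
    # Fixed positions of nodes on the 2D grid (used for neighbourhoods).
    nodes = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)],
                     dtype=float)
    W = rng.random((grid_w * grid_h, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)  # grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
    return W
```

After training, mapping each catchment area's indicator vector to its best-matching unit places similar areas near each other on the grid, which is what makes the map useful as a visual data mining surface for domain experts.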
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use different types of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the necessity to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.
Satagopam, Venkata; Gu, Wei; Eifes, Serge; Gawron, Piotr; Ostaszewski, Marek; Gebel, Stephan; Barbosa-Silva, Adriano; Balling, Rudi; Schneider, Reinhard
2016-01-01
Translational medicine is a domain turning results of basic life science research into new tools and methods in a clinical environment, for example, as new diagnostics or therapies. Nowadays, the process of translation is supported by large amounts of heterogeneous data ranging from medical data to a whole range of -omics data. This is not only a great opportunity but also a great challenge, as translational medicine big data is difficult to integrate and analyze, and requires the involvement of biomedical experts for the data processing. We show here that visualization and interoperable workflows, combining multiple complex steps, can address at least parts of the challenge. In this article, we present an integrated workflow for the exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable the biomedical experts to get a comprehensive overview and control over separate steps of the workflow. The capabilities of tranSMART enable a flexible filtering of multidimensional integrated data sets to create subsets suitable for downstream processing. A Galaxy Server offers visually aided construction of analytical pipelines, with the use of existing or custom components. A MINERVA platform supports the exploration of health and disease-related mechanisms in a contextualized analytical visualization system. We demonstrate the utility of our workflow by illustrating its subsequent steps using an existing data set, for which we propose a filtering scheme, an analytical pipeline, and a corresponding visualization of analytical results. The workflow is available as a sandbox environment, where readers can work with the described setup themselves.
Overall, our work shows how visualization and interfacing of big data processing services facilitate exploration, analysis, and interpretation of translational medicine data. PMID:27441714
Herrerías Gutiérrez, J M; García Montes, J
1994-01-01
The main objective of this study was to determine whether the effect of a combination of clebopride (0.5 mg) and simethicone (200 mg) would improve echographic visualization of retrogastric organs. An experimental aerogastric induction model was used in 50 healthy volunteers, who received 30 mL of beaten egg white to decrease the quality of echographic visualization. Improvements were evaluated after treatment. The results show that the combination of clebopride and simethicone was better than placebo in reducing gastric distension and in improving the quality of echographic images of the organs located behind the stomach, that is, the gallbladder, pancreas, and left kidney.
Integrated visualization of remote sensing data using Google Earth
NASA Astrophysics Data System (ADS)
Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.
2009-09-01
The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages, either by observing-system manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection, and archive access capabilities, as well as quantitative tools for data analysis, as standard features which are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several types of graphical information in a unique visualization system which can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks in adverse weather events. First experiences are related to the real-time integration of remote sensing data: radar, lightning, and satellite. The tool shows the animation of the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not need high computational resources. Besides this, GE provides information about the areas most affected by heavy rain or other weather phenomena. Conversely, the main disadvantage is that the product offers only qualitative information: quantitative data are available only through the graphical display (i.e., through color scales not associated with physical values that users can access easily). The procedure developed to run in real time is divided into three parts.
First of all, a crontab file launches different applications, depending on the data type (satellite, radar, or lightning) to be treated. The launch interval differs by data type, ranging from 5 minutes (satellite and lightning) to 6 minutes (radar). The second part is the use of IDL and ENVI programs, which search the archive for the images from the last hour. In the case of lightning data, the files are generated by the procedure, while for the others the procedure searches for existing imagery. Finally, the procedure generates the metadata information required by GE, the kml files, and sends them to the internal server. At the same time, on the local computer where GE is running, kml files update the information by referring to the server copies. Another application that has been evaluated is the analysis of past events. In this sense, further work is devoted to developing access procedures to archived data via cgi scripts in order to retrieve and convert the information into a format suitable for GE. The presentation includes examples of the evaluation of the use of GE, and a brief comparison with other existing visualization systems available within the SMC.
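The kml-generation step described above can be illustrated with a minimal GroundOverlay builder of the kind a procedure could regenerate every 5-6 minutes and push to the server. The image URL and bounding box are placeholders, and the SMC's actual procedure (written in IDL/ENVI) will differ in detail.

```python
from xml.sax.saxutils import escape

def radar_overlay_kml(image_url, north, south, east, west, name="radar"):
    """Build a minimal KML GroundOverlay that drapes a radar (or
    satellite/lightning) image over its geographic bounding box."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{escape(name)}</name>
    <Icon><href>{escape(image_url)}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""
```

Loading such a file through a NetworkLink with a refresh interval is the standard GE mechanism for the "local kml refers to the server copy" update pattern the abstract describes.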
Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G
2011-06-07
It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, however, the masking appeared to render a strong attenuation of earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.
Flow Visualization Techniques in Wind Tunnel Tests of a Full-Scale F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lanser, Wendy R.; Botha, Gavin J.; James, Kevin D.; Bennett, Mark; Crowder, James P.; Cooper, Don; Olson, Lawrence (Technical Monitor)
1994-01-01
The proposed paper presents flow visualization performed during experiments conducted on a full-scale F/A-18 aircraft in the 80- by 120-Foot Wind Tunnel at NASA Ames Research Center. The purpose of the flow-visualization experiments was to document the forebody and leading edge extension (LEX) vortex interaction along with the wing flow patterns at high angles of attack and low-speed, high-Reynolds-number conditions. This investigation used surface pressures in addition to both surface and off-surface flow visualization techniques to examine the flow field on the forebody, canopy, LEXs, and wings. The various techniques used to visualize the flow field were fluorescent tufts, flow cones treated with reflective material, smoke in combination with a laser light sheet, and a video imaging system for three-dimensional vortex tracking. The flow visualization experiments were conducted over an angle of attack range from 20 deg to 45 deg and over a sideslip range from -10 deg to 10 deg. The various visualization techniques as well as the pressure distributions were used to understand the flow field structure. The results show regions of attached and separated flow on the forebody, canopy, and wings as well as the vortical flow over the leading-edge extensions. This paper will also present flow visualization comparisons with the F-18 HARV flight vehicle and small-scale oil flows on the F-18.
A 3D visualization and guidance system for handheld optical imaging devices
NASA Astrophysics Data System (ADS)
Azar, Fred S.; de Roquemaurel, Benoit; Cerussi, Albert; Hajjioui, Nassim; Li, Ang; Tromberg, Bruce J.; Sauer, Frank
2007-03-01
We have developed a novel 3D visualization and guidance system for handheld optical imaging devices. In this paper, the system is applied to measurements of breast/cancerous tissue optical properties using a handheld diffuse optical spectroscopy (DOS) instrument. The combined guidance system/DOS instrument is particularly useful for monitoring neoadjuvant chemotherapy in breast cancer patients and for longitudinal studies where measurement reproducibility is critical. The system uses relatively inexpensive hardware components and comprises a 6-degrees-of-freedom (DOF) magnetic tracking device, including a DC field generator, three sensors, and a PCI card running on a PC workstation. A custom-built virtual environment combined with a well-defined workflow provides the means for image-guided measurements, improved longitudinal studies of breast optical properties, 3D reconstruction of optical properties within the anatomical map, and serial data registration. The DOS instrument characterizes tissue function such as water, lipid, and total hemoglobin concentration. The patient lies on her back at a 45-degree angle. Each spectral measurement requires consistent contact with the skin and lasts about 5-10 seconds; therefore, only a limited number of positions can be studied. In a reference measurement session, the physician acquires surface points on the breast. A Delaunay-based triangulation algorithm is used to build the virtual breast surface from the acquired points. The 3D locations of all DOS measurements are recorded. All subsequently acquired surfaces are automatically registered to the reference surface, thus allowing measurement reproducibility through image guidance using the reference measurements.
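The Delaunay-based surface-building step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the acquired surface points can be projected onto their dominant plane (found via PCA) and triangulated in 2D with `scipy.spatial.Delaunay`, then lifted back to 3D by reusing the point indices; the sample dome and all names are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_surface(points):
    """Build a triangle mesh from 3D surface points by projecting them
    onto their dominant plane and running a 2D Delaunay triangulation.
    `points` is an (N, 3) array of acquired surface positions."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # PCA: the two leading principal axes span the projection plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T          # 2D parameterization of the surface
    tri = Delaunay(uv)
    return tri.simplices              # (M, 3) vertex indices into `points`

# Example: a coarse grid of points sampled on a gentle dome
u, v = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack(
    [u.ravel(), v.ravel(), 1 - 0.3 * (u.ravel() ** 2 + v.ravel() ** 2)]
)
faces = triangulate_surface(pts)
```

The projection step only works for surfaces that are roughly height fields over a plane; a patient-facing system would need a more robust surface reconstruction for strongly curved anatomy.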
Abu El-Asrar, Ahmed M; Dosari, Mona; Hemachandran, Suhail; Gikandi, Priscilla W; Al-Muammar, Abdulrahman
2017-02-01
To evaluate the effectiveness and safety of mycophenolate mofetil (MMF) as first-line therapy combined with systemic corticosteroids in initial-onset acute uveitis associated with Vogt-Koyanagi-Harada (VKH) disease. This prospective study included 38 patients (76 eyes). The main outcome measures were final visual acuity, corticosteroid-sparing effect, progression to chronic recurrent granulomatous uveitis, and development of complications, particularly 'sunset glow fundus'. The mean follow-up period was 37.0 ± 29.3 months (range 9-120). Visual acuity of 20/20 was achieved by 93.4% of the eyes. A corticosteroid-sparing effect was achieved in all patients. The mean interval between starting treatment and tapering to 10 mg or less daily was 3.8 ± 1.3 months (range 3-7). Twenty-two patients (57.9%) discontinued treatment without relapse of inflammation. The mean time observed off treatment was 28.1 ± 19.6 months (range 1-60). None of the eyes progressed to chronic recurrent granulomatous uveitis. The ocular complications encountered were glaucoma in two eyes (2.6%) and cataract in five eyes (6.6%). None of the eyes developed 'sunset glow fundus', and none of the patients developed any systemic adverse events associated with the treatment. Use of MMF as first-line therapy combined with systemic corticosteroids in patients with initial-onset acute VKH disease prevents progression to chronic recurrent granulomatous inflammation and development of 'sunset glow fundus'. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Foot placement relies on state estimation during visually guided walking.
Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S
2017-02-01
As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose the altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to place the foot accurately on the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
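The state-estimation account can be illustrated with a scalar Kalman filter. This is a minimal sketch, not the authors' model: the prism shift, noise variances, and trial count are arbitrary illustrative values. The key qualitative point is that higher measurement noise lowers the Kalman gain, so each trial's visual error corrects the mapping estimate less, which slows adaptation to a consistent shift.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman update: combine a forward-model prediction x_pred
    (variance p_pred) with a measurement z (variance r)."""
    k = p_pred / (p_pred + r)         # gain: how much to trust the measurement
    x = x_pred + k * (z - x_pred)     # corrected state estimate
    p = (1 - k) * p_pred              # reduced uncertainty after the update
    return x, p, k

def adapt(shift, r, q=0.01, steps=30):
    """Trial-by-trial adaptation of a visuomotor mapping estimate toward a
    consistent prism shift, with visual measurement noise variance r."""
    x, p = 0.0, 1.0
    for _ in range(steps):
        p += q                        # forward model accumulates process noise
        x, p, k = kalman_update(x, p, shift, r)
    return x, k

x_low, k_low = adapt(shift=10.0, r=0.1)     # reliable visual feedback
x_high, k_high = adapt(shift=10.0, r=2.0)   # noisy visual feedback
```

With the noisier feedback the steady-state gain is smaller and the estimate remains further from the true shift after the same number of trials, mirroring the slower adaptation reported under added prism noise.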
Slater, Heather; Milne, Alice E; Wilson, Benjamin; Muers, Ross S; Balezeau, Fabien; Hunter, David; Thiele, Alexander; Griffiths, Timothy D; Petkov, Christopher I
2016-08-30
Head immobilisation is often necessary for neuroscientific procedures. A number of Non-invasive Head Immobilisation Systems (NHIS) for monkeys are available, but the need remains for a feasible integrated system combining a broad range of essential features. We developed an individualised macaque NHIS addressing several animal welfare and scientific needs. The system comprises a customised-to-fit facemask that can be used separately or combined with a back piece to form a full-head helmet. The system permits presentation of visual and auditory stimuli during immobilisation and provides mouth access for reward. The facemask was incorporated into an automated voluntary training system, allowing the animals to engage with it for increasing periods leading to full head immobilisation. We evaluated the system during performance on several auditory or visual behavioural tasks with testing sessions lasting 1.5-2h, used thermal imaging to monitor for and prevent pressure points, and measured head movement using MRI. A comprehensive evaluation of the system is provided in relation to several scientific and animal welfare requirements. Behavioural results were often comparable to those obtained with surgical implants. Cost-benefit analyses were conducted comparing the system with surgical options, highlighting the benefits of implementing the non-invasive option. The system has a number of potential applications and could be an important tool in neuroscientific research, when direct access to the brain for neuronal recordings is not required, offering the opportunity to conduct non-invasive experiments while improving animal welfare and reducing reliance on surgically implanted head posts. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Towards human-computer synergetic analysis of large-scale biological data.
Singh, Rahul; Yang, Hui; Dalziel, Ben; Asarnow, Daniel; Murad, William; Foote, David; Gormley, Matthew; Stillman, Jonathan; Fisher, Susan
2013-01-01
Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data lead to significant challenges in terms of analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state of the art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it is also limiting, since domain experts are forced to apply their instincts and expertise, such as contextual reasoning, hypothesis formulation, and exploratory analysis, only after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. In this context, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data.
In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information visualization, data exploration, and hypotheses formulation. Second, to illustrate the proposed design paradigm and measure its efficacy, we describe two prototype web applications. The first, called XMAS (Experiential Microarray Analysis System) is designed for analysis of time-series transcriptional data. The second system, called PSPACE (Protein Space Explorer) is designed for holistic analysis of structural and structure-function relationships using interactive low-dimensional maps of the protein structure space. Both these systems promote and facilitate human-computer synergy, where cognitive elements such as domain knowledge, contextual reasoning, and purpose-driven exploration, are integrated with a host of powerful algorithmic operations that support large-scale data analysis, multifaceted data visualization, and multi-source information integration. The proposed design philosophy, combines visualization, algorithmic components and cognitive expertise into a seamless processing-analysis-exploration framework that facilitates sense-making, exploration, and discovery. Using XMAS, we present case studies that analyze transcriptional data from two highly complex domains: gene expression in the placenta during human pregnancy and reaction of marine organisms to heat stress. With PSPACE, we demonstrate how complex structure-function relationships can be explored. These results demonstrate the novelty, advantages, and distinctions of the proposed paradigm. 
Furthermore, the results also highlight how domain insights can be combined with algorithms to discover meaningful knowledge and formulate evidence-based hypotheses during the data analysis process. Finally, user studies against comparable systems indicate that both XMAS and PSPACE deliver results with better interpretability while placing lower cognitive loads on the users. XMAS is available at: http://tintin.sfsu.edu:8080/xmas. PSPACE is available at: http://pspace.info/.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity
ERIC Educational Resources Information Center
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-01-01
Purpose: At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study…
ERIC Educational Resources Information Center
Austerweil, Joseph L.; Griffiths, Thomas L.; Palmer, Stephen E.
2017-01-01
How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over…
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.
Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han
2017-09-07
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides images of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to measure cracks wider than 0.1 mm with a maximum length estimation error of 7.3%.
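The binarize-then-scale idea can be sketched roughly as follows. This is a simplified stand-in for the paper's hybrid binarization, assuming a plain Otsu threshold and a pinhole-camera conversion from pixels to millimetres using the measured working distance; the synthetic image, focal length, and working distance are hypothetical values.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the histogram (a common choice for crack binarization)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                     # class-0 probability mass
    mu = np.cumsum(prob * np.arange(256))       # class-0 mean times omega
    mu_t = mu[-1]                               # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))

def crack_width_mm(img, working_distance_mm, focal_px):
    """Binarize, then convert the widest crack run (in pixels) to mm via
    the pinhole relation mm/px = working distance / focal length."""
    mask = img <= otsu_threshold(img)           # cracks are the dark class
    width_px = mask.sum(axis=1).max()           # widest horizontal run
    return width_px * working_distance_mm / focal_px

# Synthetic example: a dark 3-px-wide vertical crack on bright concrete
img = np.full((100, 100), 200, dtype=np.uint8)
img[:, 48:51] = 30
w = crack_width_mm(img, working_distance_mm=1000.0, focal_px=3000.0)
```

The per-row width measure assumes a roughly vertical crack; real pipelines trace the crack skeleton and measure width perpendicular to it.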
Enhancing astronaut performance using sensorimotor adaptability training
Bloomberg, Jacob J.; Peters, Brian T.; Cohen, Helen S.; Mulavara, Ajitkumar P.
2015-01-01
Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments—enhancing their ability to “learn to learn.” We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts. PMID:26441561
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights in their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we chose direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh follows the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge. In the end we use a combination of deformation methods, such as deformation textures, in order to fit a normal torus to the deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL.
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.
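The toroidal-coordinate storage trick can be illustrated for an idealized, undeformed circular torus (the deformation-texture step the authors describe is omitted). A minimal sketch with a hypothetical major radius: each Cartesian sample position is mapped to a toroidal angle phi and polar coordinates (r, theta) in the poloidal cross-section, which is the natural indexing for data stored on a torus.

```python
import math

def cartesian_to_toroidal(x, y, z, R):
    """Map a Cartesian point to simple toroidal coordinates around a
    circular axis of major radius R: phi is the toroidal angle, and
    (r, theta) are polar coordinates in the poloidal cross-section."""
    phi = math.atan2(y, x)
    d = math.hypot(x, y) - R          # signed distance from the axis circle
    r = math.hypot(d, z)
    theta = math.atan2(z, d)
    return r, theta, phi

def toroidal_to_cartesian(r, theta, phi, R):
    """Inverse mapping, useful for sampling the stored volume."""
    d = R + r * math.cos(theta)
    return d * math.cos(phi), d * math.sin(phi), r * math.sin(theta)

# Round trip for a sample point on a torus of major radius 1.0
x, y, z = toroidal_to_cartesian(0.3, 0.8, 2.1, R=1.0)
r, theta, phi = cartesian_to_toroidal(x, y, z, R=1.0)
```

In a GPU renderer this mapping (or its textured, deformed variant) would run per sample in the fragment shader, with interpolation between poloidal slices following the field lines rather than straight Cartesian rays.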
Kulkarni, Abhishek; Ertekin, Deniz; Lee, Chi-Hon; Hummel, Thomas
2016-01-01
The precise recognition of appropriate synaptic partner neurons is a critical step during neural circuit assembly. However, little is known about the developmental context in which recognition specificity is important to establish synaptic contacts. We show that in the Drosophila visual system, sequential segregation of photoreceptor afferents, reflecting their birth order, leads to differential positioning of their growth cones in the early target region. By combining loss- and gain-of-function analyses we demonstrate that relative differences in the expression of the transcription factor Sequoia regulate R cell growth cone segregation. This initial growth cone positioning is consolidated via the cell-adhesion molecule Capricious in R8 axons. Further, we show that the initial growth cone positioning determines synaptic layer selection through proximity-based axon-target interactions. Taken together, we demonstrate that birth-order-dependent pre-patterning of afferent growth cones is an essential prerequisite for the identification of synaptic partner neurons during visual map formation in Drosophila. DOI: http://dx.doi.org/10.7554/eLife.13715.001 PMID:26987017
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
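A particle filter for bearing-only localization of this kind can be sketched as follows. This is a minimal illustration, not the authors' system: a single static source, Gaussian bearing noise with an arbitrary sigma, and two hand-picked robot positions standing in for the information-gain action selection. It shows the core idea that measurements taken from different directions shrink the uncertainty over the source position.

```python
import numpy as np

rng = np.random.default_rng(0)
source = np.array([3.0, 2.0])               # true (hidden) source position

def bearing(robot, target):
    d = target - robot
    return np.arctan2(d[1], d[0])

def update(particles, weights, robot, z, sigma=0.1):
    """Reweight particles by how well their predicted bearing from the
    robot's current position matches the measured bearing z, then resample."""
    pred = np.arctan2(particles[:, 1] - robot[1], particles[:, 0] - robot[0])
    err = np.angle(np.exp(1j * (pred - z)))  # wrapped angular error
    weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
    weights /= weights.sum()
    # systematic resampling to avoid weight degeneracy
    n = len(weights)
    u = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.uniform(-5, 5, size=(5000, 2))   # uniform prior over the room
weights = np.full(5000, 1.0 / 5000)
prior_spread = particles.std(axis=0).sum()

# Two bearings from different robot positions triangulate the source
for robot in [np.array([0.0, 0.0]), np.array([2.0, -2.0])]:
    particles, weights = update(particles, weights, robot, bearing(robot, source))
post_spread = particles.std(axis=0).sum()
```

After the first measurement the particles collapse onto a ray of possible positions; the second measurement from a displaced viewpoint intersects that ray, which is why the robot's mobility makes distance estimation possible at all.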
Producing Curious Affects: Visual Methodology as an Affecting and Conflictual Wunderkammer
ERIC Educational Resources Information Center
Staunaes, Dorthe; Kofoed, Jette
2015-01-01
Digital video cameras, smartphones, internet and iPads are increasingly used as visual research methods with the purpose of creating an affective corpus of data. Such visual methods are often combined with interviews or observations. Not only are visual methods part of the used research methods, the visual products are used as requisites in…
How do visual and postural cues combine for self-tilt perception during slow pitch rotations?
Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L
2014-11-01
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.
Spectral discrimination in color blind animals via chromatic aberration and pupil shape.
Stubbs, Alexander L; Stubbs, Christopher W
2016-07-19
We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide "color-blind" animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.
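The chromatic-blur mechanism can be illustrated with a thin-lens toy model, not the authors' quantitative image-quality budget: because focal length varies with wavelength, an eye focused for one wavelength sees other wavelengths as a defocused blur circle. The focal length, aperture, object distance, and the linear dispersion slope below are invented illustrative numbers.

```python
def blur_diameter(wavelength_nm, focus_nm, object_dist_mm,
                  f550_mm=10.0, aperture_mm=2.0, slope_mm_per_nm=0.002):
    """Blur-circle diameter on a retina focused for `focus_nm` when light of
    `wavelength_nm` arrives instead, under a linear longitudinal chromatic
    aberration model in which focal length grows with wavelength."""
    def f(lam):                        # hypothetical dispersion curve
        return f550_mm + slope_mm_per_nm * (lam - 550.0)

    def image_dist(lam):               # thin-lens equation: 1/v = 1/f - 1/u
        return 1.0 / (1.0 / f(lam) - 1.0 / object_dist_mm)

    v_retina = image_dist(focus_nm)    # retina sits at best focus for focus_nm
    v = image_dist(wavelength_nm)
    # similar triangles: defocus distance scaled by the aperture
    return aperture_mm * abs(v - v_retina) / v

b_in_focus = blur_diameter(550.0, 550.0, 1000.0)
b_red = blur_diameter(650.0, 550.0, 1000.0)
b_blue = blur_diameter(450.0, 550.0, 1000.0)
```

Sweeping the accommodation state (`focus_nm`) and finding which wavelength minimizes blur is, in miniature, the spectral-discrimination strategy proposed for these animals; a larger or off-axis aperture makes the wavelength dependence of the blur stronger and therefore more informative.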
A novel bFGF-GH injection therapy for two patients with severe ischemic limb pain.
Ito, Naomi; Saito, Shigeru; Yamada, Makiko Hardy; Koizuka, Shiro; Obata, Hideaki; Nishikawa, Koichi; Tabata, Yasuhiko
2008-01-01
Severe ischemic pain is difficult to treat with a single therapy. Although modern angiogenic therapies have been used in patients with peripheral arterial occlusive diseases, a regimen combining novel angiogenic therapy and classic nerve blocks, including sympathectomy, has not been discussed to date. In this case report, we present two patients with peripheral arterial occlusive disease who were first treated with medication and lumbar sympathectomy, and then with a novel gelatin hydrogel drug-delivery system loaded with basic fibroblast growth factor. The gelatin hydrogel combined with recombinant basic fibroblast growth factor was injected intramuscularly into the ischemic limbs. In the first patient, with arteriosclerosis obliterans, a foot ulcer was healed, and the original score for resting pain (visual analogue scale, 5/10) was decreased to 0/10. In the second patient, with Buerger's disease, a large toe ulcer was healed, and his resting pain (visual analogue scale, 8/10) was decreased to 1/10. Some other parameters, such as skin surface temperature, transcutaneous oxygen partial pressure, and pain-free walking distance, were also improved in both patients after the combined therapy. A multimodal approach is necessary to treat severe ischemic pain. Novel angiogenic therapy combined with nerve blocks seems to be a promising option in patients with severe pain.
Mastoidectomy simulation with combined visual and haptic feedback.
Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio; John, Nigel W; Stone, Robert J
2002-01-01
Mastoidectomy is one of the most common surgical procedures relating to the petrous bone. In this paper we describe our preliminary results in the realization of a virtual reality mastoidectomy simulator. Our system is designed to work on patient-specific volumetric object models directly derived from 3D CT and MRI images. The paper summarizes the detailed task analysis performed in order to define the system requirements, introduces the architecture of the prototype simulator, and discusses the initial feedback received from selected end users.
Keshner, E.A.; Dhaher, Y.
2008-01-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29–31 years) and 3 visually sensitive (27–57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a 3-dimensional model of joint motion was developed to examine gross head motion in 3 planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field can modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms. PMID:18162402
SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y; Kadoya, N; Ito, K
Purpose: Irregular breathing can degrade four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems that present a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator marking the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, bar model, and wave model, respectively. Free breathing and wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, bar model, and wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches; its average reductions in displacement and period RMSE were 27% and 47%, respectively. Conclusion: We compared the efficacy of audio-visual biofeedback with that of Abches in reducing respiratory irregularity. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques reduced respiratory irregularities.
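The displacement-RMSE metric used above can be sketched as the deviation of resampled breathing cycles from their mean cycle (a minimal sketch with made-up traces; the study's exact resampling and reference definitions may differ):

```python
# Sketch of a displacement-RMSE metric for breathing regularity:
# deviation of each resampled breathing cycle from the mean cycle.
import math

def displacement_rmse(cycles):
    """RMSE of equal-length breathing cycles around their mean cycle."""
    n = len(cycles[0])
    mean_cycle = [sum(c[i] for c in cycles) / len(cycles) for i in range(n)]
    sq = [(c[i] - mean_cycle[i]) ** 2 for c in cycles for i in range(n)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative traces: perfectly repeated cycles give zero RMSE.
regular = [[0, 5, 10, 5], [0, 5, 10, 5]]       # -> 0.0
irregular = [[0, 5, 10, 5], [0, 8, 14, 6]]     # deeper, longer second breath
print(displacement_rmse(regular), displacement_rmse(irregular))
```

A coaching technique that tightens cycle-to-cycle reproducibility lowers this number, which is the sense in which the wave model outperformed free breathing here.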
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
To represent complex artificial terrains such as the loess terraces of Shanxi Province in northwest China, we propose a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM). A total of 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the terraces with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to grid-based DEMs (G-DEMs) under different combinations of cell size and EIV using bilinear interpolation (BIM). Our case study shows that the new method visualizes the terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains such as loess terraces.
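The bilinear interpolation step used when resampling the TIN-derived surface onto a regular grid can be sketched as follows (an illustrative sketch; the cell coordinates and elevations are assumed values, not data from the study):

```python
# Bilinear interpolation of elevation inside one grid cell, as used when
# converting a triangulated surface to a grid-based DEM.

def bilinear(x, y, x0, y0, x1, y1, q00, q10, q01, q11):
    """Interpolate elevation at (x, y) inside the cell with corners
    (x0, y0)..(x1, y1) carrying elevations q00 (lower-left), q10
    (lower-right), q01 (upper-left), q11 (upper-right)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    bottom = q00 * (1 - tx) + q10 * tx   # blend along x at the lower edge
    top = q01 * (1 - tx) + q11 * tx      # blend along x at the upper edge
    return bottom * (1 - ty) + top * ty  # blend the two edges along y

# At the cell centre the result is the mean of the four corner elevations.
print(bilinear(0.5, 0.5, 0, 0, 1, 1, 100.0, 104.0, 102.0, 106.0))  # 103.0
```

Because this smooths elevations between grid nodes, a cell size larger than the terrace step spacing would blur the steps away, which is why the cell-size bounds above matter.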
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet real-time processing requirements. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and a fusion result output unit. Registration of the dual-channel images is realized by combining hardware and software methods. A false-color fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then transfers color from a chosen reference image to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational cost of color transfer: the mapping between the standard lookup table and the improved color lookup table is simple and is computed only once for a fixed scene. The system thus achieves real-time fusion and natural colorization of infrared and visual images. Experimental results show that the color-transferred images appear natural to human eyes and highlight targets effectively against clear background details. Observers using this system can interpret the imagery better and faster, improving situational awareness and reducing target detection time.
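Statistics-based color transfer of the kind described can be sketched as per-channel mean/standard-deviation matching (a Reinhard-style sketch on plain lists; a real system would operate on image arrays, typically in a decorrelated color space, and the lookup table above would precompute this mapping once per scene):

```python
# Sketch of statistical color transfer for one channel: shift and scale the
# source values so their mean and spread match those of the reference image.
import statistics

def transfer_channel(source, reference):
    """Match the mean and standard deviation of source to reference."""
    s_mu, s_sd = statistics.mean(source), statistics.pstdev(source)
    r_mu, r_sd = statistics.mean(reference), statistics.pstdev(reference)
    scale = r_sd / s_sd if s_sd else 1.0
    return [(v - s_mu) * scale + r_mu for v in source]

# Illustrative channels with equal spread: the transfer is a pure shift,
# so the output lands exactly on the reference values.
src = [10, 20, 30, 40]
ref = [100, 110, 120, 130]
out = transfer_channel(src, ref)
```

Since the mapping depends only on four scalars per channel, it can be baked into a lookup table once per scene, which is the optimization the system above exploits to reach real-time rates.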
Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies
Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.
2012-01-01
Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction, but this effect may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
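The activation likelihood estimation used above can be sketched in one dimension: each reported activation focus contributes a Gaussian "modeled activation" map, and the maps are combined as a probabilistic union (an illustrative sketch; the coordinates, smoothing width, and 1-D simplification are assumptions, not details from the study):

```python
# 1-D sketch of activation likelihood estimation (ALE): foci that converge
# across studies produce higher combined likelihood than isolated foci.
import math

def modeled_activation(x, mu, sigma):
    """Gaussian probability contribution of one reported focus at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def ale(x, foci, sigma=2.0):
    """Probabilistic union (1 minus the product of complements) over foci."""
    prod = 1.0
    for mu in foci:
        prod *= 1.0 - modeled_activation(x, mu, sigma)
    return 1.0 - prod

foci = [0.0, 1.0, 10.0]
# Two nearby foci (0 and 1) reinforce each other; the focus at 10 stands alone,
# so ALE near 0.5 exceeds ALE at 10.
print(ale(0.5, foci), ale(10.0, foci))
```

In the real method the maps are 3-D, the Gaussian width reflects between-subject spatial uncertainty, and significance is assessed against a null distribution of random foci.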
Short-term saccadic adaptation in the macaque monkey: a binocular mechanism
Schultz, K. P.
2013-01-01
Saccadic eye movements are rapid transfers of gaze between objects of interest. Their duration is too short for the visual system to be able to follow their progress in time. Adaptive mechanisms constantly recalibrate the saccadic responses by detecting how close the landings are to the selected targets. The double-step saccadic paradigm is a common method to simulate alterations in saccadic gain. While the subject is responding to a first target shift, a second shift is introduced in the middle of this movement, which masks it from visual detection. The error in landing introduced by the second shift is interpreted by the brain as an error in the programming of the initial response, with gradual gain changes aimed at compensating the apparent sensorimotor mismatch. A second shift applied dichoptically to only one eye introduces disconjugate landing errors between the two eyes. A monocular adaptive system would independently modify only the gain of the eye exposed to the second shift in order to reestablish binocular alignment. Our results support a binocular mechanism. A version-based saccadic adaptive process detects postsaccadic version errors and generates compensatory conjugate gain alterations. A vergence-based saccadic adaptive process detects postsaccadic disparity errors and generates corrective nonvisual disparity signals that are sent to the vergence system to regain binocularity. This results in striking dynamical similarities between visually driven combined saccade-vergence gaze transfers, where the disparity is given by the visual targets, and the double-step adaptive disconjugate responses, where an adaptive disparity signal is generated internally by the saccadic system. PMID:23076111
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates, and grammars emerge naturally in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes such as perceptual grouping and figure-ground separation are special kinds of network transformations: they convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, working similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on these principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, enabling intelligent computer vision systems for design and manufacturing.
Analysis of landscape character for visual resource management
Paul F. Anderson
1979-01-01
Description, classification and delineation of visual landscape character are initial steps in developing visual resource management plans. Landscape characteristics identified as key factors in visual landscape analysis include land cover/land use and landform. Landscape types, which are combinations of landform and surface features, were delineated for management...
Spelling: Do the Eyes Have It?
ERIC Educational Resources Information Center
Westwood, Peter
2015-01-01
This paper explores the question of whether the ability to spell depends mainly on visual perception and visual imagery, or on other equally important auditory, cognitive, and motor processes. The writer examines the evidence suggesting that accurate spelling draws on a combination of visual processing, visual memory, phonological awareness,…