A navigation system for the visually impaired using colored navigation lines and RFID tags.
Seto, Tatsuya
2009-01-01
In this paper, we describe a navigation system developed to support independent walking by the visually impaired in indoor spaces. The instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system follows a colored navigation line set on the floor: a color sensor installed at the tip of the white cane senses the line, and the system informs the user by vibration that he/she is walking along it. The color recognition system is controlled by a one-chip microprocessor and can discriminate six colored navigation lines. RFID tags and a tag receiver, also installed on the white cane, form the map information system: the receiver reads tag information and announces map information to the user by mp3-formatted pre-recorded voice. Three normal subjects who were blindfolded with an eye mask were tested with this system. All of them were able to walk along the navigation line, and the performance of the map information system was good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
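To make the line-following logic concrete, here is a minimal sketch in Python of the color classification the cane's microprocessor might perform; the six reference colors, the tolerance, and the demo readings are invented for illustration, since the abstract does not specify them:

```python
# Sketch of the color-line classification described above. The six
# reference colors and the tolerance are invented for illustration;
# the abstract does not specify them.
LINE_COLORS = {
    "red": (200, 40, 40), "green": (40, 180, 60), "blue": (40, 60, 200),
    "yellow": (210, 200, 50), "orange": (220, 130, 40), "purple": (140, 50, 160),
}
TOLERANCE = 60  # max RGB distance still counted as "on the line"

def classify(rgb):
    """Return the nearest line color, or None when off the line."""
    name, dist = min(
        ((n, sum((a - b) ** 2 for a, b in zip(rgb, ref)) ** 0.5)
         for n, ref in LINE_COLORS.items()),
        key=lambda t: t[1])
    return name if dist <= TOLERANCE else None

# The cane would vibrate whenever the reading matches the chosen line:
print(classify((205, 45, 38)))    # -> 'red'
print(classify((120, 120, 120)))  # -> None (off the line)
```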
The development of a white cane which navigates the visually impaired.
Shiizu, Yuriko; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2007-01-01
In this paper, we describe a navigation system developed to support independent walking by the visually impaired in indoor spaces. The system is composed of colored navigation lines, RFID tags, and an intelligent white cane. In our system, colored marking tapes, called navigation lines, are laid along the walking route, and RFID tags are placed on these lines at each landmark point. The intelligent white cane can sense the color of a navigation line and receive tag information. The system informs the user by vibration of the cane that he/she is walking along the navigation line, and at each landmark point it announces area information by pre-recorded voice. Ten normal subjects who were blindfolded with an eye mask were tested with this system. All of them were able to walk along the navigation line, and the performance of the area information system was good. Therefore, we have concluded that our system will be extremely valuable in supporting the activities of the visually impaired.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which is highly autonomous, highly precise, and not susceptible to electrical interference, has attracted increasing attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots; in recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep-space probes, and underwater robots, further stimulating research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the advance of the lunar exploration project into its later phases, significant progress has been made in the study of visual navigation. The paper reviews the development of computer-vision-based navigation for UAVs and concludes that visual navigation is mainly applied to three areas. (1) Acquisition of UAV navigation parameters: attitude, position, and velocity can be obtained from the relationship between sensor images and the carrier's attitude, between instantaneous matching images and reference images, and between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance: among the many ways to achieve obstacle avoidance in UAV navigation, methods based on computer vision, including feature matching, template matching, and inter-frame analysis, are mainly introduced. (3) Target tracking and positioning: using the acquired images, UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering, and particle filter algorithms. The paper then describes three kinds of mainstream visual system. (1) High-speed visual systems use a parallel structure in which image detection and processing are carried out at high speed; they are applied to rapid-response systems. (2) Distributed-network visual systems place several discrete image sensors at different locations, which transmit image data to a node processor to increase the sampling rate. (3) Observer-combined visual systems combine image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rate, low processing efficiency, and strong noise. Finally, the difficulties of vision-based navigation in practical application are briefly discussed: (1) the huge workload of image operations makes the real-time performance of the system poor; (2) strong environmental influence makes the anti-interference ability of the system poor; (3) because such systems work only in particular environments, their adaptability is poor.
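As a flavor of one tracking method the survey lists, the sketch below estimates dominant image motion between two frames with OpenCV's dense Farneback optical flow; the frame file names are placeholders:

```python
# Dense optical flow between two consecutive frames, one of the
# target-tracking methods enumerated in the survey above.
# frame1.png / frame2.png are placeholder inputs.
import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense flow: one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Median flow as a crude estimate of camera-induced image motion.
dx, dy = np.median(flow[..., 0]), np.median(flow[..., 1])
print(f"dominant image shift: ({dx:.2f}, {dy:.2f}) px")
```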
A navigation system for the visually impaired an intelligent white cane.
Fukasawa, Jin; Magatani, Kazushige
2012-01-01
In this paper, we describe a navigation system developed to support independent walking by the visually impaired in indoor spaces. The instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system follows a colored navigation line set on the floor: a color sensor installed at the tip of the white cane senses the color of the line, and the system informs the user by vibration that he/she is walking along it. The color recognition system is controlled by a one-chip microprocessor. RFID tags and a tag receiver are used in the map information system. The tags are set on the colored navigation line, and an antenna for the tags and a tag receiver are also installed on the white cane. The receiver reads area information as a tag number and announces map information to the user by mp3-formatted pre-recorded voice. In addition, we have developed a direction identification technique that detects the user's walking direction using a triaxial acceleration sensor. Three normal subjects who were blindfolded with an eye mask were tested with our developed navigation system. All of them were able to walk along the navigation line perfectly, so we consider the performance of the system good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the detected features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.
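The feature-detection-and-matching step such a system depends on can be sketched with ORB descriptors in OpenCV; the detector choice and file names are assumptions, since the paper does not name its feature type:

```python
# Match features from an on-board camera frame against a reference
# (satellite) image -- a sketch of the registration step, not the
# paper's actual pipeline. ORB and the file names are assumptions.
import cv2

reference = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("onboard_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

# Hamming distance suits ORB's binary descriptors; cross-check keeps
# only mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)
print(f"{len(matches)} tentative correspondences")
```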
Development of voice navigation system for the visually impaired by using IC tags.
Takatori, Norihiko; Nojima, Kengo; Matsumoto, Masashi; Yanashima, Kenji; Magatani, Kazushige
2006-01-01
There are about 300,000 visually impaired persons in Japan. Most of them are elderly and cannot become skillful in using a white cane, even with effort to learn. Therefore, guiding systems that support the independent activities of the visually impaired are required. In this paper, we describe a white cane system developed to support independent walking by the visually impaired in indoor spaces. The system is composed of colored navigation lines that include IC tags and an intelligent white cane that carries a navigation computer. In our system, colored navigation lines put on the floor of the target space from the start point to the destination, together with IC tags set at landmark points, indicate the route to the destination. The white cane has a color sensor, an IC tag transceiver, and a computer system that includes a voice processor. The cane senses the navigation line of the target color with its color sensor; when the sensor finds the target color, the cane informs the user by vibration that he/she is on the navigation line, so by simply following this vibration the user can reach the destination. At some landmark points, however, guidance is necessary; at these points an IC tag is set under the navigation line, and the cane communicates with the tag and informs the user about the landmark point by pre-recorded voice. Ten normal subjects who were blindfolded were tested with our developed system. All of them could walk along the navigation line, and the IC tag information system worked well. Therefore, we have concluded that our system will be very valuable in supporting the activities of the visually impaired.
Image processing and applications based on visualizing navigation service
NASA Astrophysics Data System (ADS)
Hwang, Chyi-Wen
2015-07-01
Facing the overabundance of semantic web information, this paper proposes a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables users to retrieve and construct their personal knowledge. Furthermore, based on market segmentation theory, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Regardless of whether the simple or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognitive effect than the "Non CM + SS + KOD navigation system"; however, for users with no web design experience, the navigation system shows no obvious cognitive effect. (2) Classification is essential in semantic web communication design: different user groups have diverse preference needs and different cognitive styles in the CM + SS + KOD navigation system.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg
2012-01-01
We introduce the PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population due to the multiple health conditions that they face, such as diabetes and its complications. Trials of the PERCEPT system with 24 blind and visually impaired users in a multistory building show its effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225
A 3D Model Based Indoor Navigation System for Hubei Provincial Museum
NASA Astrophysics Data System (ADS)
Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.
2013-11-01
3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, a survey showed that a 3D model is the navigation environment preferred by users. Therefore, a 3D-model-based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which together support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all the data needed in one database and performs most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, visualizes the 3D model on a mobile display with high performance.
Indoor Navigation by People with Visual Impairment Using a Digital Sign System
Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan
2013-01-01
There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path leading to the surgical target cannot be directly visualized, making it difficult to obtain a full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty of determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop) developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit allowed facile integration and implementation of the desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
A Visual-Cue-Dependent Memory Circuit for Place Navigation.
Qin, Han; Fu, Ling; Hu, Bo; Liao, Xiang; Lu, Jian; He, Wenjing; Liang, Shanshan; Zhang, Kuan; Li, Ruijie; Yao, Jiwei; Yan, Junan; Chen, Hao; Jia, Hongbo; Zott, Benedikt; Konnerth, Arthur; Chen, Xiaowei
2018-06-05
The ability to remember and to navigate to safe places is necessary for survival. Place navigation is known to involve medial entorhinal cortex (MEC)-hippocampal connections. However, learning-dependent changes in neuronal activity in the distinct circuits remain unknown. Here, by using optic fiber photometry in freely behaving mice, we discovered the experience-dependent induction of a persistent-task-associated (PTA) activity. This PTA activity critically depends on learned visual cues and builds up selectively in the MEC layer II-dentate gyrus, but not in the MEC layer III-CA1 pathway, and its optogenetic suppression disrupts navigation to the target location. The findings suggest that the visual system, the MEC layer II, and the dentate gyrus are essential hubs of a memory circuit for visually guided navigation. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
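A toy version of the view-matching strategy behind these simulations: compare a low-resolution panoramic view against a stored snapshot at every candidate rotation and take the best match. The array size and RMS criterion are illustrative assumptions:

```python
# Rotational image-difference matching -- a toy version of the view-
# matching strategy simulated above. Views are 1-D panoramic
# brightness arrays; the size and RMS metric are illustrative.
import numpy as np

def best_heading(stored: np.ndarray, current: np.ndarray) -> int:
    """Rotation (in pixels) that best aligns the current view to the stored one."""
    errors = [np.sqrt(np.mean((np.roll(current, r) - stored) ** 2))
              for r in range(len(stored))]
    return int(np.argmin(errors))

# 72-pixel panorama ~ 5 degrees/pixel, i.e. deliberately low resolution.
rng = np.random.default_rng(0)
stored = rng.random(72)
current = np.roll(stored, -10) + 0.05 * rng.standard_normal(72)  # agent turned 10 px
print(best_heading(stored, current) * 5, "degrees")  # -> 50 degrees
```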
An Indoor Navigation System for the Visually Impaired
Guerrero, Luis A.; Vasquez, Francisco; Ochoa, Sergio F.
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution makes it possible to identify a person's position and to calculate the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient for strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment. PMID:22969398
Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People
Martinez-Sala, Alejandro Santos; Losilla, Fernando; Sánchez-Aarnoutse, Juan Carlos; García-Haro, Joan
2015-01-01
Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system. PMID:26703610
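Since the abstract names A* for pathfinding over a spatial database, here is a compact, generic A* sketch on an occupancy grid; the grid representation and Manhattan heuristic are assumptions, not SUGAR's actual data model:

```python
# Generic A* over a 4-connected occupancy grid -- a sketch of the
# pathfinding step named above, not SUGAR's implementation.
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 0 means free; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale queue entry
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = cost + 1
                if ng < g.get(nb, float("inf")):
                    g[nb], came_from[nb] = ng, cur
                    heapq.heappush(open_set, (ng + h(nb), ng, nb))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked middle row
```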
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme that uses crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to account for the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters against a georeferenced database for continuous estimation of absolute states. In addition, expressions for the relative state estimation bias are derived, and novel necessary and sufficient observability criteria based on error analysis are provided to improve navigation performance; these criteria hold for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
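For comparison, the standard point-based way to recover relative pose between two descent images uses the essential matrix, sketched below with stock OpenCV calls on synthetic data; the paper's own method works on crater edge curves rather than point features, and the intrinsics here are placeholders:

```python
# Relative pose between two descent images via the essential matrix:
# the standard point-based alternative to the paper's edge-curve
# method. Intrinsics and terrain points are synthetic placeholders.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
rng = np.random.default_rng(1)

# Synthetic terrain points seen from two lander poses.
X = np.column_stack([rng.uniform(-50, 50, 100), rng.uniform(-50, 50, 100),
                     rng.uniform(80, 120, 100)])
t_true = np.array([2.0, 0.0, -5.0])              # descent motion between frames
p1 = (K @ X.T).T;            p1 = p1[:, :2] / p1[:, 2:]
p2 = (K @ (X - t_true).T).T; p2 = p2[:, :2] / p2[:, 2:]

E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, p1, p2, K)
print("recovered translation direction:", t.ravel())  # unit vector, up to sign/scale
```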
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
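The decoding logic of a multivoxel pattern analysis can be illustrated with a cross-validated linear classifier on synthetic data; the shapes, labels, and classifier choice below are illustrative, not the paper's exact pipeline:

```python
# Cross-validated decoding of a scene property (e.g., which pathways
# are open) from multivoxel patterns -- illustrative shapes only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300
labels = rng.integers(0, 3, n_trials)          # 3 affordance layouts (assumed)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[np.arange(n_trials), labels] += 1.5   # inject a decodable signal

scores = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```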
Visual orientation and navigation in nocturnal arthropods.
Warrant, Eric; Dacke, Marie
2010-01-01
With their highly sensitive visual systems, the arthropods have evolved a remarkable capacity to orient and navigate at night. Whereas some navigate under the open sky, and take full advantage of the celestial cues available there, others navigate in more difficult conditions, such as through the dense understory of a tropical rainforest. Four major classes of orientation are performed by arthropods at night, some of which involve true navigation (i.e. travel to a distant goal that lies beyond the range of direct sensory contact): (1) simple straight-line orientation, typically for escape purposes; (2) nightly short-distance movements relative to a shoreline, typically in the context of feeding; (3) long-distance nocturnal migration at high altitude in the quest to locate favorable feeding or breeding sites, and (4) nocturnal excursions to and from a fixed nest or food site (i.e. homing), a task that in most species involves path integration and/or the learning and recollection of visual landmarks. These four classes of orientation--and their visual basis--are reviewed here, with special emphasis given to the best-understood animal systems that are representative of each. 2010 S. Karger AG, Basel.
Vision and visual navigation in nocturnal insects.
Warrant, Eric; Dacke, Marie
2011-01-01
With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.
INSIGHT: RFID and Bluetooth enabled automated space for the blind and visually impaired.
Ganz, Aura; Gandhi, Siddhesh Rajan; Wilson, Carole; Mullett, Gary
2010-01-01
In this paper we introduce INSIGHT, an indoor location tracking and navigation system that helps the blind and visually impaired easily navigate to a chosen destination in a public building. INSIGHT makes use of RFID and Bluetooth technology deployed within the building to locate and track users. The PDA-based user device interacts with the INSIGHT server and gives the user navigation instructions in audio form. The proposed system provides multi-resolution localization of users, enabling accurate navigation instructions when the user is in the vicinity of RFID tags, and it includes a PANIC button that provides navigation instructions wherever the user is in the building. Moreover, the system continuously monitors the zone in which the user walks, which enables it to detect when the user is in a wrong zone of the building that may not lead to the desired destination.
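The zone-monitoring idea can be sketched as a simple lookup from tag readings to building zones; every identifier below is invented for illustration:

```python
# Zone monitoring: flag when the latest RFID/Bluetooth reading puts
# the user outside the zones of the planned route. All IDs invented.
TAG_TO_ZONE = {"tag-101": "lobby", "tag-102": "corridor-A",
               "tag-201": "corridor-B", "tag-202": "clinic-2"}

def check_zone(tag_id: str, planned_zones: set[str]) -> str:
    zone = TAG_TO_ZONE.get(tag_id, "unknown")
    if zone in planned_zones:
        return f"on route: {zone}"
    return f"WARNING: {zone} is off the planned route -- re-routing"

route = {"lobby", "corridor-A", "clinic-2"}
print(check_zone("tag-101", route))  # on route: lobby
print(check_zone("tag-201", route))  # WARNING: corridor-B is off the planned route
```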
Evaluation of a technique to simplify area navigation and required navigation performance charts
DOT National Transportation Integrated Search
2013-06-30
Performance based navigation (PBN), an enabler for the Federal Aviation Administration's Next Generation Air Transportation System (NextGEN), supports the design of more precise flight procedures. However, these new procedures can be visually complex...
Navigation Constellation Design Using a Multi-Objective Genetic Algorithm
2015-03-26
…programs. This specific tool not only offers high-fidelity simulations, but it also offers the visual aid provided by STK. The ability to… MATLAB and STK. STK is a program that allows users to model, analyze, and visualize space systems. Users can create objects such as satellites and… position dilution of precision (PDOP) and system cost. This thesis utilized Satellite Tool Kit (STK) to calculate PDOP values of navigation…
Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph
2018-06-01
Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool because its projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic brightmatter drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary but powerful capability of this system is the projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit simultaneous visualization of reconstructed navigation images and the surgical field. By utilizing navigated instruments, this configuration can support live image-guided surgery, or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted on the Synaptive robotic brightmatter drive. We successfully integrated the two platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN, and disc shavers and trials were used under RTN during the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery, or real-time navigation (RTN). Off-axis projection also allows upright, neutral cervical spine operative ergonomics for the surgeons and improves surgical team visualization and education compared with traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but long-term outcome measurements will be required to establish efficacy.
Sharma, Vinod; Simpson, Richard; Lopresti, Edmund; Schmeler, Mark
2010-01-01
Some individuals with disabilities are denied powered mobility because they lack the visual, motor, and/or cognitive skills required to safely operate a power wheelchair. The Drive-Safe System (DSS) is an add-on, distributed, shared-control navigation assistance system for power wheelchairs intended to provide safe and independent mobility to such individuals. The DSS is a human-machine system in which the user is responsible for high-level control of the wheelchair, such as choosing the destination, path planning, and basic navigation actions, while the DSS overrides unsafe maneuvers through autonomous collision avoidance, wall following, and door crossing. In this project, the DSS was clinically evaluated in a controlled laboratory with blindfolded, nondisabled individuals. Further, these individuals' performance with the DSS was compared with standard cane use for navigation assistance by people with visual impairments. Results indicate that compared with a cane, the DSS significantly reduced the number of collisions. Users rated the DSS favorably even though they took longer to navigate the same obstacle course than they would have using a standard long cane. Participants experienced less physical demand, effort, and frustration when using the DSS as compared with a cane. These findings suggest that the DSS can be a viable powered mobility solution for wheelchair users with visual impairments.
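The shared-control arbitration such a system performs can be sketched as follows; the distance thresholds and command interface are invented for illustration and are not the DSS's actual values:

```python
# Shared control: pass the user's joystick command through unless a
# range reading indicates imminent collision, then override.
# Thresholds and interfaces are illustrative, not the DSS's values.
STOP_DIST_M = 0.4   # hard-stop distance (assumed)
SLOW_DIST_M = 1.0   # begin limiting speed (assumed)

def arbitrate(user_speed: float, user_turn: float, obstacle_dist_m: float):
    """Return the (speed, turn) actually sent to the motors."""
    if obstacle_dist_m < STOP_DIST_M:
        return 0.0, user_turn            # collision-avoidance override
    if obstacle_dist_m < SLOW_DIST_M:
        scale = (obstacle_dist_m - STOP_DIST_M) / (SLOW_DIST_M - STOP_DIST_M)
        return user_speed * scale, user_turn
    return user_speed, user_turn         # user retains full control

print(arbitrate(1.0, 0.2, 2.5))  # (1.0, 0.2) -- clear path
print(arbitrate(1.0, 0.2, 0.7))  # (0.5, 0.2) -- slowed near obstacle
```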
Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.
Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri
2017-11-01
Visual navigation is a benchmark information processing task that can be used to identify the consequence of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues such as the sun or the pattern of polarized skylight and terrestrial cues such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from being strictly diurnal, crepuscular, and nocturnal. In addition, a number of species have the ability to change from a day- to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches not only between species but also within a single species. Their unique caste system with an exclusive pedestrian mode of locomotion in workers and an exclusive life on the wing in males allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim-light. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Communication and navigation equipment for... § 121.349 Communication and navigation equipment for operations under VFR over routes not navigated by... receiver providing visual and aural signals; and (iii) One ILS receiver; and (3) Any RNAV system used to...
Real-time visual mosaicking and navigation on the seafloor
NASA Astrophysics Data System (ADS)
Richmond, Kristof
Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. 
The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon, where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
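The pairwise registration step at the heart of such mosaicking can be sketched with a feature-based homography in OpenCV; file names are placeholders, and the thesis's sparse information-filter map estimation is far more involved than this single tile-to-tile step:

```python
# Pairwise registration for mosaicking: estimate a homography from
# ORB matches and warp the new image into the mosaic frame. File
# names are placeholders; this sketches one tile-to-tile step, not
# the thesis's sparse information-filter map estimation.
import cv2
import numpy as np

mosaic = cv2.imread("mosaic_so_far.png", cv2.IMREAD_GRAYSCALE)
new = cv2.imread("new_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1500)
k1, d1 = orb.detectAndCompute(mosaic, None)
k2, d2 = orb.detectAndCompute(new, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # maps new -> mosaic
warped = cv2.warpPerspective(new, H, (mosaic.shape[1], mosaic.shape[0]))
```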
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time. In this paper, we propose a set of techniques that greatly reduce this long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene, which increases pose estimation accuracy and reduces failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation; using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments with a human-worn system show that location can be estimated to within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
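A stripped-down, one-dimensional stand-in for the fusion step: a scalar Kalman filter blending dead-reckoned displacement with occasional absolute fixes. The real system uses a full extended Kalman filter over 6-DOF pose; the noise parameters here are illustrative:

```python
# Scalar Kalman fusion of dead-reckoned position with occasional
# absolute (GPS-like) fixes -- a 1-D stand-in for the EKF above.
def kalman_1d(odometry_steps, fixes, q=0.04, r=4.0):
    """odometry_steps: per-step displacement; fixes: {step: position}."""
    x, p = 0.0, 1.0                    # state estimate and variance
    for k, dx in enumerate(odometry_steps):
        x, p = x + dx, p + q           # predict: integrate odometry, grow variance
        if k in fixes:                 # update: absolute measurement available
            kgain = p / (p + r)
            x += kgain * (fixes[k] - x)
            p *= (1 - kgain)
    return x, p

steps = [1.0] * 100                    # nominal 1 m/step; drift hidden in q
print(kalman_1d(steps, fixes={50: 49.0, 99: 97.5}))
```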
[Impairment of safety in navigation caused by alcohol: impact on visual function].
Grütters, G; Reichelt, J A; Ritz-Timme, S; Thome, M; Kaatsch, H J
2003-05-01
So far in Germany, no legally binding blood alcohol limits exist for establishing impairment of navigability. The aim of our interdisciplinary project was to obtain data in order to identify critical blood alcohol limits; in this context the visual system seems to be of decisive importance. Twenty-one professional skippers performed realistic navigation tasks in a sea traffic simulator, both sober and under the influence of alcohol. The following parameters were considered: visual acuity, stereopsis, color vision, and accommodation. Under the influence of alcohol (average blood alcohol concentration: 1.08 per mille), every skipper considered himself completely capable of navigating; yet while the simulations were running, all of the skippers made nautical mistakes or underestimated dangerous situations. Severe impairment of visual acuity or binocular function was not observed. Accommodation decreased by an average of 18% (p=0.0001). In the color vision test, skippers made more mistakes (p=0.017) and needed more time (p=0.004). Changes in visual function as well as vegetative and psychological reactions could be the cause of these mistakes, and alcohol should therefore be regarded as a severe risk factor for safety in sea navigation.
Navigation Assistance: A Trade-Off between Wayfinding Support and Configural Learning Support
ERIC Educational Resources Information Center
Munzer, Stefan; Zimmer, Hubert D.; Baus, Jorg
2012-01-01
Current GPS-based mobile navigation assistance systems support wayfinding, but they do not support learning about the spatial configuration of an environment. The present study examined effects of visual presentation modes for navigation assistance on wayfinding accuracy, route learning, and configural learning. Participants (high-school students)…
Towards automated visual flexible endoscope navigation.
van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J
2013-10-01
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive and non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications, and automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to identify the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases, with papers included according to the inclusion criteria. A total of 135 papers were analyzed, of which 26 were ultimately included. Navigation is often based on visual information, meaning the endoscope is steered using the images that the endoscope itself produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms; additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms that can handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
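To give a flavor of the lumen-centralization technique the review describes, the sketch below steers toward the centroid of the darkest image region, which is where the lumen typically appears; the percentile threshold and file name are illustrative assumptions:

```python
# Lumen-centralization heuristic: steer toward the centroid of the
# darkest region of the endoscopic image. Threshold is illustrative.
import cv2
import numpy as np

def steering_offset(frame_gray: np.ndarray) -> tuple[float, float]:
    """Return (dx, dy) from the image center to the dark-lumen centroid."""
    blur = cv2.GaussianBlur(frame_gray, (21, 21), 0)
    dark = blur < np.percentile(blur, 5)   # darkest 5% of pixels
    ys, xs = np.nonzero(dark)
    cy, cx = ys.mean(), xs.mean()
    h, w = frame_gray.shape
    return cx - w / 2, cy - h / 2          # positive dx => steer right

img = cv2.imread("endoscope_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder
print(steering_offset(img))
```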
The effects of link format and screen location on visual search of web pages.
Ling, Jonathan; Van Schaik, Paul
2004-06-22
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
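The camera-pose-estimation step maps naturally onto the standard perspective-n-point (PnP) formulation, sketched below with OpenCV; the marker coordinates, detections, and intrinsics are placeholders:

```python
# Camera pose from known 3-D positions of needle-shaped navigation
# aids and their 2-D detections -- the standard PnP formulation.
# Coordinates and intrinsics below are placeholders.
import cv2
import numpy as np

object_pts = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30]], np.float32)
image_pts = np.array([[320, 240], [420, 250], [310, 140], [330, 260]], np.float32)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)                   # camera orientation w.r.t. aids
    print("camera position:", (-R.T @ tvec).ravel())
```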
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed; the process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error define a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
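Given paired measurements from the two trackers, the reported error metrics reduce to a point distance and a relative rotation angle. A minimal sketch, assuming the poses have already been expressed in a common coordinate frame by calibration and orientations are unit quaternions:

```python
import numpy as np

def tracking_errors(p_em, q_em, p_opt, q_opt):
    """Positional (mm) and rotational (deg) error of an EM-tracked pose
    against an optically tracked ground truth.

    p_*: 3-vectors; q_*: unit quaternions (w, x, y, z).
    """
    pos_err = np.linalg.norm(np.asarray(p_em) - np.asarray(p_opt))
    # Angle of the relative rotation between the two orientations:
    # theta = 2 * arccos(|<q_em, q_opt>|)
    dot = abs(float(np.dot(q_em, q_opt)))
    rot_err = np.degrees(2.0 * np.arccos(np.clip(dot, 0.0, 1.0)))
    return pos_err, rot_err
```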
Field evaluation of a wearable multimodal soldier navigation system.
Aaltonen, Iina; Laarni, Jari
2017-09-01
Challenging environments pose difficulties for terrain navigation, and therefore wearable, multimodal navigation systems have been proposed to overcome these difficulties. Few such navigation systems, however, have been evaluated in field conditions. We evaluated how a multimodal system can aid navigation in a forest in the context of a military exercise. The system included a head-mounted display, headphones, and a tactile vibrating vest. Visual, auditory, and tactile modalities were tested and evaluated in unimodal, bimodal, and trimodal conditions. Questionnaires, interviews, and observations were used to evaluate the advantages and disadvantages of each modality and their multimodal use. The guidance was considered easy to interpret and helpful for navigation. Simplicity of the displayed information was required, which partially conflicted with the request to have both distance and directional information available.
Shape Perception and Navigation in Blind Adults
Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara
2017-01-01
Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available, and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we developed a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape, with early blind participants occasionally reporting that they perceived a square while actually reproducing a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development. PMID:28144226
Visual Odometry for Autonomous Deep-Space Navigation
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm navigation.
Development of the navigation system for visually impaired.
Harada, Tetsuya; Kaneko, Yuki; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2004-01-01
A white cane is a typical support instrument for the visually impaired. They use a white cane to detect obstacles while walking, so in areas for which they have a mental map they can walk with a white cane without the help of others. However, they cannot walk independently in unknown areas, even with a white cane, because a white cane is a device for detecting obstacles, not a navigation device for following the correct route. We are now developing a navigation system for the visually impaired for use in indoor spaces. In Japan, colored guide lines to a destination are sometimes provided for sighted people: these lines are attached to the floor, and one can reach the destination by walking along one of them. In our system, a newly developed white cane senses one colored guide line and notifies the user by vibration. The system recognizes the color of the line attached to the floor with an optical sensor installed in the white cane. To guide the user still more smoothly, infrared beacons (optical beacons), which can provide voice guidance, are also used.
Synergies in Astrometry: Predicting Navigational Error of Visual Binary Stars
NASA Astrophysics Data System (ADS)
Gessner Stewart, Susan
2015-08-01
Celestial navigation can employ a number of bright stars that are in binary systems. Often these are unresolved, appearing as a single, center-of-light object. A number of these are, however, wide pairs that could introduce a margin of error in the navigation solution if not handled properly. To illustrate the importance of good orbital solutions for binary systems - as well as good astrometry in general - the relationship between the center-of-light versus individual catalog position of celestial bodies and the error in terrestrial position derived via celestial navigation is demonstrated. From the list of navigational binary stars, fourteen binary systems with at least 3.0 arcseconds apparent separation are explored. Maximum navigational error is estimated under the assumption that the bright star in the pair is observed at maximum separation, but the center-of-light is employed in the navigational solution. The relationships between navigational error and separation, orbital periods, and observer's latitude are discussed.
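The scale of the effect follows from the classic celestial-navigation rule of thumb (not stated in the abstract) that one arcminute of altitude error corresponds to roughly one nautical mile of terrestrial position error. At the paper's 3.0-arcsecond separation threshold, the worst-case displacement is therefore on the order of a hundred meters:

```latex
% Rule-of-thumb bound, assumed here rather than quoted from the paper:
% 1 arcminute of altitude error ~ 1 nautical mile of position error.
\Delta d \;\approx\; \frac{\Delta\theta\,[\mathrm{arcsec}]}{60}\ \mathrm{NM},
\qquad
\Delta\theta = 3.0'' \;\Rightarrow\;
\Delta d \approx 0.05\ \mathrm{NM} \approx 93\ \mathrm{m}.
```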
NASA Technical Reports Server (NTRS)
Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.
2001-01-01
This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.
2017-02-01
Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm to become the current standard of care for cranial neurosurgery. Intraoperative image-guided navigation systems are currently used to allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such, quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground-truth landmark to the intraoperative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of the screw was compared to its coordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.
Navigation and Image Injection for Control of Bone Removal and Osteotomy Planes in Spine Surgery.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven Rainer; Archavlis, Elefterios; Giese, Alf
2017-04-01
In contrast to cranial interventions, neuronavigation in spinal surgery is used in few applications, not tapping into its full technological potential. We have developed a method to preoperatively create virtual resection planes and volumes for spinal osteotomies and export 3-D operation plans to a navigation system controlling intraoperative visualization using a surgical microscope's head-up display. The method was developed using a Sawbone® model of the lumbar spine, demonstrating feasibility with high precision. Computer tomographic and magnetic resonance image data were imported into Amira®, a 3-D visualization software package. Resection planes were positioned, and resection volumes representing intraoperative bone removal were defined. Fused to the original Digital Imaging and Communications in Medicine data, the osteotomy planes were exported to the cranial version of a Brainlab® navigation system. A navigated surgical microscope with a video connection to the navigation system allowed intraoperative image injection to visualize the preplanned resection planes. The workflow was applied to a patient presenting with a congenital hemivertebra of the thoracolumbar spine. Dorsal instrumentation with pedicle screws and rods was followed by resection of the deformed vertebra guided by the in-view image injection of the preplanned resection planes into the optical path of a surgical microscope. Postoperatively, the patient showed no neurological deficits, and the spine was found to be restored in near physiological posture. The intraoperative visualization of resection planes in a microscope's head-up display was found to assist the surgeon during the resection of a complex-shaped bone wedge and may help to further increase accuracy and patient safety.
Anisotropy of Human Horizontal and Vertical Navigation in Real Space: Behavioral and PET Correlates.
Zwergal, Andreas; Schöberl, Florian; Xiong, Guoming; Pradhan, Cauchy; Covic, Aleksandar; Werner, Philipp; Trapp, Christoph; Bartenstein, Peter; la Fougère, Christian; Jahn, Klaus; Dieterich, Marianne; Brandt, Thomas
2016-10-17
Spatial orientation was tested during a horizontal and vertical real navigation task in humans. Video tracking of eye movements was used to analyse the behavioral strategy and combined with simultaneous measurements of brain activation and metabolism ([18F]-FDG-PET). Spatial navigation performance was significantly better during horizontal navigation. Horizontal navigation was predominantly visually and landmark-guided. PET measurements indicated that glucose metabolism increased in the right hippocampus, bilateral retrosplenial cortex, and pontine tegmentum during horizontal navigation. In contrast, vertical navigation was less reliant on visual and landmark information. In PET, vertical navigation activated the bilateral hippocampus and insula. Direct comparison revealed a relative activation in the pontine tegmentum and visual cortical areas during horizontal navigation and in the flocculus, insula, and anterior cingulate cortex during vertical navigation. In conclusion, these data indicate a functional anisotropy of human 3D-navigation in favor of the horizontal plane. There are common brain areas for both forms of navigation (hippocampus) as well as unique areas such as the retrosplenial cortex, visual cortex (horizontal navigation), flocculus, and vestibular multisensory cortex (vertical navigation). Visually guided landmark recognition seems to be more important for horizontal navigation, while distance estimation based on vestibular input might be more relevant for vertical navigation.
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time that avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
Pavlova, Marina; Sokolov, Alexander; Krägeloh-Mann, Ingeborg
2007-02-01
Visual navigation in familiar and unfamiliar surroundings is an essential ingredient of adaptive daily life behavior. Recent brain imaging work helps to recognize that establishing connectivity between brain regions is of importance for successful navigation. Here, we ask whether the ability to navigate is impaired in adolescents who were born prematurely and suffer congenital bilateral periventricular brain damage that might affect the pathways interconnecting subcortical structures with the cortex. Performance on a set of visual labyrinth tasks was significantly worse in patients with periventricular leukomalacia (PVL) as compared with premature-born controls without lesions and term-born adolescents. The ability for visual navigation inversely relates to the severity of motor disability, leg-dominated bilateral spastic cerebral palsy. This agrees with the view that navigation ability substantially improves with practice and might be compromised in individuals with restrictions in active spatial exploration. Visual navigation is negatively linked to the volumetric extent of lesions over the right parietal and frontal periventricular regions. Whereas impairments in the visual processing of point-light biological motion in patients with PVL are associated with bilateral parietal periventricular lesions, navigation ability is specifically linked to frontal lesions in the right hemisphere. We suggest that more anterior periventricular lesions impair the interrelations between the right hippocampus and cortical areas, leading to disintegration of the neural networks engaged in visual navigation. For the first time, we show that the severity of right frontal periventricular damage and leg-dominated motor disorders can serve as independent predictors of visual navigation disability.
Indoor magnetic navigation for the blind.
Riehle, Timothy H; Anderson, Shane M; Lichter, Patrick A; Giudice, Nicholas A; Sheikh, Suneel I; Knuesel, Robert J; Kollmann, Daniel T; Hedin, Daniel S
2012-01-01
Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of a navigation system that infers the user's location using only magnetic sensing. It is well known that the environments within steel-frame structures are subject to significant magnetic distortions. Many of these distortions are persistent and have sufficient strength and spatial character to serve as the basis for a location technology. This paper describes the development and evaluation of a prototype magnetic navigation system consisting of a wireless magnetometer placed at the user's hip, streaming magnetic readings to a smartphone that runs the location algorithms. Human trials were conducted to assess the efficacy of the system by studying route-following performance with blind and sighted subjects using the navigation system for real-time guidance.
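The underlying idea is fingerprint matching: a pre-surveyed profile of field magnitudes along a route serves as the map, and the live magnetometer stream is matched against it. A minimal sketch, assuming a 1D corridor and using normalized cross-correlation; the abstract does not specify the paper's actual location algorithms, so all names here are illustrative.

```python
import numpy as np

def locate_on_route(route_profile, recent_window):
    """Infer position along a surveyed corridor from magnetic readings.

    route_profile: 1D array of field magnitudes sampled at known
    positions along the route (the 'map').
    recent_window: 1D array of the user's latest magnitude samples.
    Returns the index into route_profile that best matches the window.
    """
    route = np.asarray(route_profile, dtype=float)
    w = np.asarray(recent_window, dtype=float)
    w = w - w.mean()  # remove sensor bias
    n, m = len(route), len(w)
    best_i, best_score = 0, -np.inf
    for i in range(n - m + 1):
        seg = route[i:i + m]
        s = seg - seg.mean()
        denom = np.linalg.norm(s) * np.linalg.norm(w)
        # Normalized cross-correlation is robust to offsets in field strength.
        score = float(s @ w) / denom if denom > 0 else -np.inf
        if score > best_score:
            best_i, best_score = i, score
    return best_i + m - 1  # map index of the newest sample
```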
Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite
2016-09-01
… aerial platform for subsequent visual sensor integration. Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain. Acronyms recovered from the report front matter: ground control station; GPS (Global Positioning System); IDVD (inverse dynamics in the virtual domain); ILP (integer linear program); INS (inertial navigation system).
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
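The core of the scale-ambiguity fix can be stated in a few lines: a single absolute range from the laser distance meter pins down the unknown global scale of the monocular reconstruction. A simplified sketch under that assumption (the paper goes further and fuses many such measurements along the trajectory to correct scale drift, not just one global factor):

```python
import numpy as np

def recover_scale(traj_up_to_scale, d_reconstructed, d_laser):
    """Fix the monocular scale ambiguity with one absolute measurement.

    traj_up_to_scale: (N, 3) camera positions from monocular VO
    (correct shape, unknown scale).
    d_reconstructed: distance to the laser spot in the same arbitrary
    VO units; d_laser: the laser distance meter reading in meters.
    """
    s = d_laser / d_reconstructed           # meters per VO unit
    return s * np.asarray(traj_up_to_scale)  # trajectory now in meters
```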
Feng, Guohu; Wu, Wenqi; Wang, Jinling
2012-01-01
A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model in a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that such observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions. PMID:23012523
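The Lie-derivative rank criterion has a familiar linear counterpart that is easy to check numerically: the system is (locally) observable only if the stacked observability matrix has full rank. The sketch below shows that linearized test; it illustrates the flavor of the criterion, not the paper's exact symbolic procedure.

```python
import numpy as np

def observability_rank(F, H):
    """Rank of the linearized observability matrix [H; HF; ...; HF^(n-1)].

    F: (n, n) state-transition Jacobian at the operating point.
    H: (m, n) measurement Jacobian.
    Full rank (n) is the linear counterpart of the Lie-derivative
    rank condition; a deficient rank flags unobservable directions,
    e.g. when no rotational freedom is excited.
    """
    F, H = np.asarray(F, float), np.asarray(H, float)
    n = F.shape[0]
    blocks, M = [], H.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ F
    return np.linalg.matrix_rank(np.vstack(blocks))
```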
Occlusion-free animation of driving routes for car navigation systems.
Takahashi, Shigeo; Yoshida, Kenichi; Shimada, Kenji; Nishita, Tomoyuki
2006-01-01
This paper presents a method for occlusion-free animation of geographical landmarks, and its application to a new type of car navigation system in which driving routes of interest are always visible. This is achieved by animating a nonperspective image where geographical landmarks such as mountain tops and roads are rendered as if they are seen from different viewpoints. The technical contribution of this paper lies in formulating the nonperspective terrain navigation as an inverse problem of continuously deforming a 3D terrain surface from the 2D screen arrangement of its associated geographical landmarks. The present approach provides a perceptually reasonable compromise between the navigation clarity and visual realism where the corresponding nonperspective view is fully augmented by assigning appropriate textures and shading effects to the terrain surface according to its geometry. An eye tracking experiment is conducted to prove that the present approach actually exhibits visually-pleasing navigation frames while users can clearly recognize the shape of the driving route without occlusion, together with the spatial configuration of geographical landmarks in its neighborhood.
A novel platform for electromagnetic navigated ultrasound bronchoscopy (EBUS).
Sorger, Hanne; Hofstad, Erlend Fagertun; Amundsen, Tore; Langø, Thomas; Leira, Håkon Olav
2016-08-01
Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) of mediastinal lymph nodes is essential for lung cancer staging and distinction between curative and palliative treatment. Precise sampling is crucial. Navigation and multimodal imaging may improve the efficiency of EBUS-TBNA. We demonstrate a novel EBUS-TBNA navigation system in a dedicated airway phantom. Using a convex probe EBUS bronchoscope (CP-EBUS) with an integrated sensor for electromagnetic (EM) position tracking, we performed navigated CP-EBUS in a phantom. Preoperative computed tomography (CT) and real-time ultrasound (US) images were integrated into a navigation platform for EM navigated bronchoscopy. The coordinates of targets in CT and US volumes were registered in the navigation system, and the position deviation was calculated. The system visualized all tumor models and displayed their fused CT and US images in correct positions in the navigation system. Navigating the EBUS bronchoscope was fast and easy. Mean error observed between US and CT positions for 11 target lesions (37 measurements) was [Formula: see text] mm, maximum error was 5.9 mm. The feasibility of our novel navigated CP-EBUS system was successfully demonstrated. An EBUS navigation system is needed to meet future requirements of precise mediastinal lymph node mapping, and provides new opportunities for procedure documentation in EBUS-TBNA.
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
Development of the navigation system for the visually impaired by using white cane.
Hirahara, Yoshiaki; Sakurai, Yusuke; Shiidu, Yuriko; Yanashima, Kenji; Magatani, Kazushige
2006-01-01
A white cane is a typical support instrument for the visually impaired. They use a white cane for the detection of obstacles while walking. So, the area where they have a mental map, they can walk using white cane without help of others. However, they cannot walk independently in the unknown area, even if they use a white cane. Because, a white cane is a detecting device for obstacles and not a navigation device for there correcting route. Now, we are developing the navigation system for the visually impaired which uses indoor space. In Japan, sometimes colored guide lines to the destination are used for a normal person. These lines are attached on the floor, we can reach the destination, if we walk along one of these line. In our system, a developed new white cane senses one colored guide line, and makes notice to a user by vibration. This system recognizes the color of the line stuck on the floor by the optical sensor attached in the white cane. And in order to guide still more smoothly, infrared beacons (optical beacon), which can perform voice guidance, are also used.
Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?
Carman, Heidi M; Mactutus, Charles F
2002-09-01
Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map updating, by relying completely on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust enough to recover from the 'driver-lost' scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, and when hitchhiking should and should not be allowed, through experimental results. PMID:28809803
Kolarik, Andrew J.; Scarfe, Amy C.; Moore, Brian C. J.; Pardhan, Shahina
2017-01-01
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. PMID:28407000
2013-05-29
… visual navigation to maneuver autonomously to reduce the size of the … successful orbit and three-dimensional imaging of an RSO, using passive visual-only navigation and real-time near-optimal guidance. The mission design … Systems Tool Kit (STK) in the Earth-centered Earth-fixed (ECF) coordinate system, loaded into Simulink and transformed to the BFF for calculation of the SRP.
Inertial Navigation System Standardized Software Development. Volume 1. Introduction and Summary
1976-06-01
… the Loran receiver, the Tacan receiver, the Omega receiver, the satellite-based instrumentation, the multimode radar, the star tracker and the visual … accelerometer scale factor, and the barometric altimeter bias. The accuracy (1σ values) of typical navigation-aid measurements (other than satellite-derived) …
Schwarz, Sebastian; Albert, Laurence; Wystrach, Antoine; Cheng, Ken
2011-03-15
Many animal species, including some social hymenoptera, use the visual system for navigation. Although the insect compound eyes have been well studied, less is known about the second visual system in some insects, the ocelli. Here we demonstrate navigational functions of the ocelli in the visually guided Australian desert ant Melophorus bagoti. These ants are known to rely on both visual landmark learning and path integration. We conducted experiments to reveal the role of ocelli in the perception and use of celestial compass information and landmark guidance. Ants with directional information from their path integration system were tested with covered compound eyes and open ocelli on an unfamiliar test field where only celestial compass cues were available for homing. These full-vector ants, using only their ocelli for visual information, oriented significantly towards the fictive nest on the test field, indicating the use of celestial compass information that is presumably based on polarised skylight, the sun's position or the colour gradient of the sky. Ants without any directional information from their path-integration system (zero-vector) were tested, also with covered compound eyes and open ocelli, on a familiar training field where they have to use the surrounding panorama to home. These ants failed to orient significantly in the homeward direction. Together, our results demonstrated that M. bagoti could perceive and process celestial compass information for directional orientation with their ocelli. In contrast, the ocelli do not seem to contribute to terrestrial landmark-based navigation in M. bagoti.
Sundvall, Erik; Nyström, Mikael; Forss, Mattias; Chen, Rong; Petersson, Håkan; Ahlfeldt, Hans
2007-01-01
This paper describes selected earlier approaches to graphically relating events to each other and to time; some new combinations are also suggested. These are then combined into a unified prototyping environment for visualization and navigation of electronic health records. Google Earth (GE) is used for handling display and interaction of clinical information stored using openEHR data structures and 'archetypes'. The strength of the approach comes from GE's sophisticated handling of detail levels, from coarse overviews to fine-grained details, which has been combined with linear, polar and region-based views of clinical events related to time. The system should be easy to learn since all the visualization styles can use the same navigation. The structured and multifaceted approach to handling time that is possible with archetyped openEHR data lends itself well to visualization, and integration with openEHR components is provided in the environment.
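Google Earth ingests KML, so one natural encoding, though the paper does not publish its exact one, is a time-stamped Placemark per clinical event, with coordinates carrying the event's position in the chosen linear, polar, or region-based layout rather than geography. A hypothetical sketch:

```python
# Illustrative only: not the paper's actual openEHR-to-KML mapping.
# Each clinical event becomes a KML Placemark whose TimeStamp drives
# Google Earth's time slider and whose coordinates place it in the
# chosen abstract layout (linear, polar, or region-based).
def event_placemark(name, when_iso, x, y):
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<TimeStamp><when>{when_iso}</when></TimeStamp>"
        f"<Point><coordinates>{x},{y},0</coordinates></Point>"
        "</Placemark>"
    )

print(event_placemark("blood pressure measurement",
                      "2006-03-14T09:30:00Z", 12.5, 3.2))
```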
Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Sugano, Nobuhiko
2014-06-01
Anterior sacroiliac joint plate fixation for unstable pelvic ring fractures avoids soft tissue problems in the buttocks; however, the lumbosacral nerves lie in close proximity to the sacroiliac joint and may be injured during the procedure. A 49-year-old woman with a type C pelvic ring fracture was treated with an anterior sacroiliac plate using a computed tomography (CT)-three-dimensional (3D)-fluoroscopy matching navigation system, which visualized the lumbosacral nerves as well as the iliac and sacral bones. We used a flat-panel-detector 3D C-arm, which made it possible to superimpose our preoperative CT-based plan on the intra-operative 3D-fluoroscopic images. No postoperative complications were noted. Intra-operative lumbosacral nerve visualization using computer navigation was useful for recognizing the 'at-risk' area for nerve injury during anterior sacroiliac plate fixation.
Liau, Ee Shan; Yen, Ya-Ping; Chen, Jun-An
2018-05-11
Spinal motor neurons (MNs) extend their axons to communicate with their innervating targets, thereby controlling movement and complex tasks in vertebrates. Thus, it is critical to uncover the molecular mechanisms of how motor axons navigate to, arborize, and innervate their peripheral muscle targets during development and degeneration. Although transgenic Hb9::GFP mouse lines have long served to visualize motor axon trajectories during embryonic development, detailed descriptions of the full spectrum of axon terminal arborization remain incomplete due to the pattern complexity and limitations of current optical microscopy. Here, we describe an improved protocol that combines light sheet fluorescence microscopy (LSFM) and robust image analysis to qualitatively and quantitatively visualize developing motor axons. This system can be easily adopted to cross genetic mutants or MN disease models with Hb9::GFP lines, revealing novel molecular mechanisms that lead to defects in motor axon navigation and arborization.
Vibrotactile Feedbacks System for Assisting the Physically Impaired Persons for Easy Navigation
NASA Astrophysics Data System (ADS)
Safa, M.; Geetha, G.; Elakkiya, U.; Saranya, D.
2018-04-01
The NAYAN architecture helps a visually impaired person navigate. As is well known, visually impaired people require special support even to access services such as public transportation. This prototype system is a portable device; it is easy to carry and can be used to travel through both familiar and unfamiliar environments. The system consists of a GPS receiver that obtains NMEA data from the satellites; the data are provided to the user's smartphone through an Arduino board. The application uses two vibrotactile actuators, placed on the left and right shoulders, whose vibration feedback gives information about the current location. An ultrasonic sensor detects obstacles in front of the visually impaired person. A Bluetooth module connected to the Arduino board sends the information received from the GPS to the user's mobile phone.
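The GPS receiver's NMEA output is plain ASCII, so extracting a position on the Arduino or phone side is mostly string parsing. A minimal sketch for the standard NMEA 0183 GGA sentence; the prototype's actual firmware is not described in the abstract, and 'parse_gga' is an illustrative name.

```python
def parse_gga(sentence):
    """Parse latitude/longitude from an NMEA GGA sentence.

    Field layout follows the standard NMEA 0183 GGA definition:
    $GPGGA,time,lat,N/S,lon,E/W,fix,...
    """
    f = sentence.split(",")
    if not f[0].endswith("GGA") or f[2] == "":
        return None  # not a GGA sentence, or no fix yet
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmm -> degrees
    if f[3] == "S":
        lat = -lat
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmm -> degrees
    if f[5] == "W":
        lon = -lon
    return lat, lon

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# -> (48.1173, 11.516666...)
```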
The Shuttle Mission Simulator computer generated imagery
NASA Technical Reports Server (NTRS)
Henderson, T. H.
1984-01-01
Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degrees of freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.
Soldier-worn augmented reality system for tactical icon visualization
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
2012-06-01
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
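Rendering a geo-registered icon comes down to transforming the target into the head frame using the fused pose estimate and projecting it through a pinhole model of the display. A hedged sketch, assuming positions have already been converted to a local ENU frame; the names are illustrative, not the ULTRA-Vis interfaces.

```python
import numpy as np

def project_icon(p_target_enu, p_head_enu, R_head_to_enu, f_px, cx, cy):
    """Project a geo-registered point into see-through display pixels.

    p_target_enu, p_head_enu: 3-vectors in a local ENU frame (meters).
    R_head_to_enu: 3x3 head orientation from the fused pose estimate.
    f_px, cx, cy: pinhole focal length and principal point of the
    display model (placeholders for the 40x30-degree display).
    """
    # Vector to the target expressed in the head/display frame
    # (camera convention: x right, y down, z forward).
    v = R_head_to_enu.T @ (np.asarray(p_target_enu) - np.asarray(p_head_enu))
    x, y, z = v
    if z <= 0:
        return None  # behind the wearer; render an off-screen cue instead
    return (f_px * x / z + cx, f_px * y / z + cy)
```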
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further simplify these complex tasks. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and recognizes whether an encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work carries an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953
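Of the three protocols, SSVEP detection illustrates the low-cost flavor of the system well: the attended flicker frequency shows up as a spectral peak over occipital channels. A minimal single-channel sketch; the window length, band limits, and harmonic weighting are assumptions, not the authors' pipeline.

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs, window_s=4.0):
    """Pick the attended SSVEP target by spectral power.

    eeg: 1D occipital-channel signal; fs: sampling rate (Hz);
    stim_freqs: flicker frequencies (Hz) of the candidate targets.
    Returns the index of the most likely attended target.
    """
    n = int(window_s * fs)
    seg = eeg[-n:] - np.mean(eeg[-n:])
    spec = np.abs(np.fft.rfft(seg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def band_power(f0, bw=0.4):
        sel = (freqs >= f0 - bw) & (freqs <= f0 + bw)
        return spec[sel].sum()

    # Include the second harmonic, which often carries SSVEP power too.
    scores = [band_power(f) + band_power(2 * f) for f in stim_freqs]
    return int(np.argmax(scores))
```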
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor
Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.
2015-01-01
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
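The fusion logic the paper describes, where corners flag potential obstacles and the depth image tells how far away they are, can be caricatured in a few lines. A sketch with illustrative thresholds; the published system's rules for suggesting a safe path are more elaborate.

```python
import numpy as np
import cv2

def navigation_hint(frame_bgr, depth_m, stop_dist=1.0):
    """Suggest 'stop' / 'left' / 'right' / 'ahead' from camera + depth.

    frame_bgr: color image; depth_m: per-pixel depth in meters from
    the infrared sensor (zero where no reading is available).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    h, w = gray.shape
    near_left = near_right = 0
    for c in (corners if corners is not None else []):
        x, y = c.ravel().astype(int)
        d = depth_m[y, x]
        if 0 < d < stop_dist:  # obstacle feature closer than threshold
            if x < w // 2:
                near_left += 1
            else:
                near_right += 1
    if near_left == 0 and near_right == 0:
        return "ahead"
    if near_left and near_right:
        return "stop"
    return "right" if near_left else "left"  # steer toward the clear side
```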
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS/LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display, for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
Image navigation as a means to expand the boundaries of fluorescence-guided surgery
NASA Astrophysics Data System (ADS)
Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.
2012-05-01
Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.
A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
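A closed-loop step of such a system maps the tail shape inferred from imaging to an intended displacement and moves the virtual world accordingly. A toy sketch with linear placeholder gains; the paper learns the tail-to-trajectory mapping from free-swimming data rather than assuming it.

```python
import numpy as np

def update_virtual_position(pos, heading, tail_angle, dt,
                            k_fwd=0.5, k_turn=1.2):
    """One closed-loop step of a 2D visual virtual reality.

    tail_angle: instantaneous tail deflection inferred from imaging of
    the head-fixed larva (radians). k_fwd and k_turn are illustrative
    linear gains standing in for the learned mapping.
    """
    heading += k_turn * tail_angle * dt           # turning component
    speed = k_fwd * abs(tail_angle)               # beat amplitude -> thrust
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    # The renderer then redraws the whole-field stimulus from `pos`,
    # either continuously or only at the end of a discrete swim bout.
    return pos, heading
```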
Visual Navigation in Nocturnal Insects.
Warrant, Eric; Dacke, Marie
2016-05-01
Despite their tiny eyes and brains, nocturnal insects have evolved a remarkable capacity to visually navigate at night. Whereas some use moonlight or the stars as celestial compass cues to maintain a straight-line course, others use visual landmarks to navigate to and from their nest. These impressive abilities rely on highly sensitive compound eyes and specialized visual processing strategies in the brain. ©2016 Int. Union Physiol. Sci./Am. Physiol. Soc.
Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.
de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie
2017-09-01
Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, but only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. A dedicated turn-by-turn guidance device was particularly favoured. Visual maps are especially useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been shown to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Evidence for discrete landmark use by pigeons during homing.
Mora, Cordula V; Ross, Jeremy D; Gorsevski, Peter V; Chowdhury, Budhaditya; Bingman, Verner P
2012-10-01
Considerable efforts have been made to investigate how homing pigeons (Columba livia f. domestica) are able to return to their loft from distant, unfamiliar sites while the mechanisms underlying navigation in familiar territory have received less attention. With the recent advent of global positioning system (GPS) data loggers small enough to be carried by pigeons, the role of visual environmental features in guiding navigation over familiar areas is beginning to be understood, yet, surprisingly, we still know very little about whether homing pigeons can rely on discrete, visual landmarks to guide navigation. To assess a possible role of discrete, visual landmarks in navigation, homing pigeons were first trained to home from a site with four wind turbines as salient landmarks as well as from a control site without any distinctive, discrete landmark features. The GPS-recorded flight paths of the pigeons on the last training release were straighter and more similar among birds from the turbine site compared with those from the control site. The pigeons were then released from both sites following a clock-shift manipulation. Vanishing bearings from the turbine site continued to be homeward oriented as 13 of 14 pigeons returned home. By contrast, at the control site the vanishing bearings were deflected in the expected clock-shift direction and only 5 of 13 pigeons returned home. Taken together, our results offer the first strong evidence that discrete, visual landmarks are one source of spatial information homing pigeons can utilize to navigate when flying over a familiar area.
Neuroendovascular magnetic navigation: clinical experience in ten patients.
Dabus, Guilherme; Gerstle, Ronald J; Cross, Dewitte T; Derdeyn, Colin P; Moran, Christopher J
2007-04-01
The magnetic navigation system consists of an externally generated magnetic field that is used to control and steer a magnetically tipped microguidewire. The goal of this study was to demonstrate that the use of the magnetic navigation system and its magnetic microguidewire is feasible and safe in all types of neuroendovascular procedures. A magnetic navigation system is an interventional workstation that combines a biplanar fluoroscopy system with a computer-controlled magnetic field generator to provide both visualization and control of a magnetically activated endovascular microguidewire. Ten consecutive patients underwent a variety of neuroendovascular procedures using the magnetic guidance system and magnetic microguidewire. All patients presented with a neurovascular disease that was suitable for endovascular treatment. Multiple different devices and embolic agents were used. Of the ten patients, three were male and seven female. Their mean age was 53.9 years. The predominant neurovascular condition was intracranial aneurysm (nine patients). One patient had a left mandibular arteriovenous malformation. All treatments were successfully performed in the magnetic navigation suite. The magnetic navigation system and the magnetic microguidewire enabled safe and accurate endovascular navigation, allowing placement of the microcatheters in the desired locations. There were no neurological complications or deaths in our series. The use of the magnetic navigation system and the magnetic microguidewire in the endovascular treatment of patients with neurovascular diseases is feasible and safe.
Oral and maxillofacial surgery with computer-assisted navigation system.
Kawachi, Homare; Kawachi, Yasuyuki; Ikeda, Chihaya; Takagi, Ryo; Katakura, Akira; Shibahara, Takahiko
2010-01-01
Intraoperative computer-assisted navigation has gained acceptance in maxillofacial surgery, with applications in an increasing number of indications. We adapted a commercially available wireless passive marker system which allows calibration and tracking of virtually every instrument in maxillofacial surgery. Virtual computer-generated anatomical structures are displayed intraoperatively in a semi-immersive head-up display. Continuous observation of the operating field facilitated by computer assistance enables surgical navigation in accordance with the physician's preoperative plans. This case report documents the potential for augmented visualization concepts in the surgical resection of tumors in the oral and maxillofacial region. We report a case of T3N2bM0 carcinoma of the maxillary gingiva which was surgically resected with the assistance of the Stryker Navigation Cart System. This system was found to be useful in assisting preoperative planning and intraoperative monitoring.
Wei, Peng-Hu; Cong, Fei; Chen, Ge; Li, Ming-Chu; Yu, Xin-Guang; Bao, Yu-Hai
2017-02-01
Diffusion tensor imaging-based navigation is unable to resolve crossing fibers or to determine with accuracy the fanning, origin, and termination of fibers. It is important to improve the accuracy of localizing white matter fibers for improved surgical approaches. We propose a solution to this problem using navigation based on track density imaging extracted from high-definition fiber tractography (HDFT). A 28-year-old asymptomatic female patient with a left-lateral ventricle meningioma was enrolled in the present study. Language and visual tests, magnetic resonance imaging findings, both preoperative and postoperative HDFT, and the intraoperative navigation and surgery process are presented. Track density images were extracted from tracts derived using full q-space (514 directions) diffusion spectrum imaging (DSI) and integrated into a neuronavigation system. Navigation accuracy was verified via intraoperative records and postoperative DSI tractography, as well as a functional examination. DSI successfully represented the shape and range of the Meyer loop and arcuate fasciculus. Extracted track density images from the DSI were successfully integrated into the navigation system. The relationship between the operation channel and surrounding tracts was consistent with the postoperative findings, and the patient was functionally intact after the surgery. DSI-based TDI navigation allows for the visualization of anatomic features such as fanning and angling and helps to identify the range of a given tract. Moreover, our results show that our HDFT navigation method is a promising technique that preserves neural function.
Human Factors Engineering #3 Crewstation Assessment for the OH-58F Helicopter
2014-03-01
Additionally, workload was assessed for level of interoperability 2 (LOI 2) tasks that the aircrew performed with an unmanned aircraft system (UAS). For example, pilots often perform navigation tasks, communicate via multiple radios, monitor aircraft systems, and assist the pilot on the controls.
Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne
2016-03-01
One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.
A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy
NASA Astrophysics Data System (ADS)
Seifabadi, Reza; Xu, Sheng; Pinto, Peter; Wood, Bradford J.
2016-03-01
Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system, to enable remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci-assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while sitting at the da Vinci console. This study also presents a phantom feasibility study, with the goal being intraoperative MRI-US image fusion to bring preoperative MR images into the operating room for the best visualization of the gland, boundaries, nerves, etc. Method: A two-degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer. A custom joystick was made to enable remote navigation of MoTRUS. Safety features were included to avoid inadvertent risks (if any) to the patient. Custom software was developed to fuse preoperative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: Remote TRUS probe navigation with MoTRUS was evaluated in a patient during prostatectomy after obtaining the required consent. It took 10 min to set up the system in the OR. MoTRUS provided capability similar to conventional manual scanning, in addition to remote navigation and stable imaging. No complications were observed. Image fusion was evaluated on a commercial prostate phantom. Electromagnetic tracking was used for the fusion. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation provides the physician with more precise and easier control of the ultrasound image while removing the burden of manual manipulation of the probe. Image fusion improved visualization of the prostate and its boundaries in a phantom study.
Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus
Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.
2013-01-01
Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713
Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo
2010-01-01
Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Neuronal connectome of a sensory-motor circuit for visual navigation
Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár
2014-01-01
Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217
Self-motivated visual scanning predicts flexible navigation in a virtual environment.
Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C
2014-01-01
The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2012 CFR
2012-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2013 CFR
2013-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
Educational Process Navigator as Means of Creation of Individual Educational Path of a Student
ERIC Educational Resources Information Center
Khuziakhmetov, Anvar N.; Sytina, Nadezhda S.
2016-01-01
The problem addressed in the article arises from the search for new alternative models of individual educational paths for students in a continuous multi-level education system, based on navigators of the educational process that serve as a visual matrix of individual educational space. The purpose of the article is to develop the…
Stated Preferences for Components of a Personal Guidance System for Nonvisual Navigation
ERIC Educational Resources Information Center
Golledge, Reginald G.; Marston, James R.; Loomis, Jack M.; Klatzky, Roberta L.
2004-01-01
This article reports on a survey of the preferences of visually impaired persons for a possible personal navigation device. The results showed that the majority of participants preferred speech input and output interfaces, were willing to use such a product, thought that they would make more trips with such a device, and had some concerns about…
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board Raspberry Pi 3 computer. The results of an experiment on mobile robot navigation using this control system are presented.
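A minimal sketch of the correction idea follows, assuming a simple dead-reckoning model and a landmark whose map position is known; all numbers and names are illustrative, not taken from the article.

```python
import numpy as np

# Sketch (assumed details): dead-reckoned pose drifts due to wheel slip;
# when the camera recognizes an artificial landmark with a known map pose,
# the robot's coordinates are re-anchored from that observation.
landmarks = {17: np.array([2.0, 3.5])}  # landmark id -> known map position (m)

def odometry_step(pose, distance, dheading):
    x, y, th = pose
    th += dheading
    return np.array([x + distance * np.cos(th), y + distance * np.sin(th), th])

def correct_with_landmark(pose, landmark_id, range_m, bearing_rad):
    """Reset (x, y) from a visually detected landmark at a measured
    range and bearing, trusting vision over drifting odometry."""
    lx, ly = landmarks[landmark_id]
    th = pose[2]
    x = lx - range_m * np.cos(th + bearing_rad)
    y = ly - range_m * np.sin(th + bearing_rad)
    return np.array([x, y, th])

pose = np.array([0.0, 0.0, 0.0])                  # x, y, heading
pose = odometry_step(pose, 1.0, 0.1)              # slip-prone dead reckoning
pose = correct_with_landmark(pose, 17, 2.8, 0.4)  # vision-based fix
print(pose)
```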
Mitsuhashi, Shota; Akamatsu, Yasushi; Kobayashi, Hideo; Kusayama, Yoshihiro; Kumagai, Ken; Saito, Tomoyuki
2018-02-01
Rotational malpositioning of the tibial component can lead to poor functional outcome in TKA. Although various surgical techniques have been proposed, precise rotational placement of the tibial component is difficult to accomplish even with the use of a navigation system. The purpose of this study was to assess whether combined CT-based and image-free navigation systems accurately replicate the rotational alignment of the tibial component that was preoperatively planned on CT, compared with the conventional method. We compared the number of outliers for rotational alignment of the tibial component using combined CT-based and image-free navigation systems (navigated group) with that of the conventional method (conventional group). Seventy-two TKAs were performed between May 2012 and December 2014. In the navigated group, the anteroposterior axis was prepared using the CT-based navigation system and the tibial component was positioned under control of the navigation. In the conventional group, the tibial component was placed with reference to the Akagi line, which was determined visually. Fisher's exact probability test was performed to evaluate the results. There was a significant difference between the two groups with regard to the number of outliers: 3 outliers in the navigated group compared with 12 in the conventional group (P < 0.01). We conclude that combined CT-based and image-free navigation systems decreased the number of rotational outliers of the tibial component and were helpful in replicating the accurate rotational alignment of the tibial component that was preoperatively planned.
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single-pilot instrument flight rules (IFR) simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three-dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids, with their associated frequencies, call letters, locations, and orientations, plus runways and true headings, are included in the database. The simulation included a TV-displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows simulation of a full mission scenario including breakout and landing.
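As a hedged illustration of referencing navaids to a local Cartesian frame, the sketch below uses a flat-earth approximation about an airport reference point. The helper and constants are hypothetical; the report's actual coordinate conversion is not described in this abstract.

```python
import math

# Hypothetical helper (not from the report): place a navaid in a simulator's
# local Cartesian frame using a flat-earth approximation about the reference
# airport, adequate over a terminal-area sized region.
EARTH_R = 6371000.0  # mean Earth radius, metres

def to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    rlat, rlon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = EARTH_R * (lon - rlon) * math.cos(rlat)  # east of reference (m)
    y = EARTH_R * (lat - rlat)                   # north of reference (m)
    return x, y

# A navaid near Norfolk referenced to a Washington-area reference point
print(to_local_xy(36.9, -76.2, 38.85, -77.04))
```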
Mursch, K; Gotthardt, T; Kröger, R; Bublat, M; Behnke-Mursch, J
2005-08-01
We evaluated an advanced concept for patient-based navigation during minimally invasive neurosurgical procedures. An infrared-based, off-line neuro-navigation system (LOCALITE, Bonn, Germany) was applied during operations within a 0.5 T intraoperative MRI scanner (iMRI) (Signa SF, GE Medical Systems, Milwaukee, WI, USA), in addition to the conventional real-time system. The three-dimensional (3D) data set was acquired intraoperatively and updated when brain shift was suspected. Twenty-three patients with subcortical lesions were operated upon with the aim of minimising operative trauma. Small craniotomies (median diameter 30 mm, mean diameter 27 mm) could be placed exactly. In all cases, the primary goal of the operation (total resection or biopsy) was achieved in a straightforward procedure without permanent morbidity. The navigation system could be used easily and without technical problems. In contrast to the real-time navigation mode of the MR system, the higher quality as well as the real-time display of the MR images reconstructed from the 3D reference data provided sufficient visual-manual coordination. The system combines the advantages of conventional neuro-navigation with the ability to adapt intraoperatively to continuously changing anatomy. Thus, small and/or deep lesions can be operated upon in straightforward, minimally invasive operations.
An Unmanned Aerial Vehicle Cluster Network Cruise System for Monitor
NASA Astrophysics Data System (ADS)
Jiang, Jirong; Tao, Jinpeng; Xin, Guipeng
2018-06-01
The existing maritime cruising system mainly uses manned motorboats to monitor the quality of coastal water and to patrol and maintain navigation-aiding facilities, which entails high energy consumption, a small cruising range for monitoring, insufficient information control, and low visualization. In recent years, the application of UAS in the maritime field has alleviated these problems to some extent. The cluster-based unmanned network monitoring cruise system designed in this project uses a floating self-powered launching platform for small UAVs as a carrier, applies the idea of clustering, and combines the strong controllability of multi-rotor UAVs with their capability to carry customized modules, constituting an unmanned, visualized, and normalized monitoring cruise network that realizes the functions of maritime cruising, maintenance of navigation aids, and monitoring of coastal water quality.
Assessment of feedback modalities for wearable visual aids in blind mobility
Sorrentino, Paige; Bohlool, Shadi; Zhang, Carey; Arditti, Mort; Goodrich, Gregory; Weiland, James D.
2017-01-01
Sensory substitution devices engage sensory modalities other than vision to communicate information typically obtained through the sense of sight. In this paper, we examine the ability of subjects who are blind to follow simple verbal and vibrotactile commands that allow them to navigate a complex path. A total of eleven visually impaired subjects were enrolled in the study. Prototype systems were developed to deliver verbal and vibrotactile commands, allowing an investigator to guide a subject through a course. Using these feedback modes, subjects could follow commands easily and navigate significantly faster than with their cane alone (p < 0.05). The feedback modes were similar with respect to the increased speed of course completion. Subjects rated the usability of the feedback systems as "above average," with scores of 76.3 and 90.9 on the System Usability Scale. PMID:28182731
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.110 Visual distress signals required. Visual distress signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2013 CFR
2013-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.110 Visual distress signals required. Visual distress signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.110 Visual distress signals required. Visual distress signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2012 CFR
2012-07-01
Title 33, Navigation and Navigable Waters (continued): Boating Safety Equipment Requirements, Visual Distress Signals. § 175.110 Visual distress signals required. Visual distress signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir
2014-01-01
Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
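The distance-to-cue mapping at the heart of such a device can be sketched as follows; the pitch and rate parameters are assumptions for illustration, not the EyeCane's actual encoding.

```python
# Illustrative sketch (parameters assumed): map a single-point distance
# reading to an auditory cue, with closer obstacles producing
# higher-pitched, more frequent beeps.
def distance_to_cue(distance_m, max_range_m=5.0):
    d = min(max(distance_m, 0.0), max_range_m)
    proximity = 1.0 - d / max_range_m           # 0 = far, 1 = touching
    pitch_hz = 300 + 900 * proximity            # 300-1200 Hz
    interval_s = 0.05 + 0.45 * (1 - proximity)  # beeps speed up when close
    return pitch_hz, interval_s

for d in (0.3, 1.5, 4.0):
    print(d, distance_to_cue(d))
```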
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
ExoMars VisLoc - The Visual Localisation System for the ExoMars Rover
NASA Astrophysics Data System (ADS)
Ward, R.; Hamilton, W.; Silva, N.; Pereira, V.
2016-08-01
Maintaining accurate knowledge of the current position of vehicles on the surface of Mars is a considerable problem. The lack of an orbital GPS means that the absolute position of a rover at any instant is very difficult to determine, making it difficult to accurately and safely plan hazard avoidance manoeuvres. Some on-board methods of determining the evolving pose of a rover are well known, such as using wheel odometry to keep a log of the distance travelled. However, there are associated problems: wheels can slip in the Martian soil, providing odometry readings which can mislead navigation algorithms. One solution is to use a visual localisation system, which uses cameras to determine the actual rover motion from images of the terrain. By measuring movement from the terrain, an independent measure of the actual movement can be obtained to a high degree of accuracy. This paper presents the progress of the project to develop the Visual Localisation system for the ExoMars rover (VisLoc). The core algorithm used in the system is known as OVO (Oxford Visual Odometry), developed by the Mobile Robotics Group at the University of Oxford. Over a number of projects this system has been adapted from its original purpose (navigation systems for autonomous vehicles) to be a viable system for the unique challenges associated with extra-terrestrial use.
Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation
2012-09-13
... odometry, a SICK laser scanning unit (lidar), an inertial measurement unit (IMU), and an ultrasonic distance measurement system (Figure 32).
Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien
2018-01-11
Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, though theoretically subject to interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of strengthened working memory on the detection rate, acting through increased search efficiency, is more apparent in high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
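A minimal sketch of the kind of moderation analysis described, using fabricated placeholder data (the study's data and exact model are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch with synthetic placeholder data: model hazard detection (0/1) from
# a working-memory score, a visual-clutter level, and their interaction,
# mirroring a moderation analysis.
rng = np.random.default_rng(1)
wm = rng.normal(size=400)        # working-memory score (standardized)
clutter = rng.normal(size=400)   # visual clutter level (standardized)
logit = 0.8 * wm - 0.5 * clutter + 0.6 * wm * clutter
detected = rng.random(400) < 1 / (1 + np.exp(-logit))

X = np.column_stack([wm, clutter, wm * clutter])
model = LogisticRegression().fit(X, detected)
print(dict(zip(["wm", "clutter", "wm x clutter"], model.coef_[0])))
```

A positive interaction coefficient would indicate that the working-memory benefit grows with clutter, the pattern the abstract reports.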
Visualization and interaction tools for aerial photograph mosaics
NASA Astrophysics Data System (ADS)
Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António
1997-05-01
This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, which will enable users both to retrieve geospatial information existing on the Portuguese National System for Geographic Information World Wide Web server and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophoto mosaics. The main applications of this digital spatial library are pointed out and discussed, namely for the education, professional, and tourism markets. Future developments are considered, related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.
Modeling of pilot's visual behavior for low-level flight
NASA Astrophysics Data System (ADS)
Schulte, Axel; Onken, Reiner
1995-06-01
Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded on the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, grounded in the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built on fuzzy evaluation of terrain features to determine the landmarks used by pilots. A computer implementation of the model can be shown to select those features that trained pilots also preferred.
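A toy sketch of fuzzy landmark evaluation in the spirit described; the membership functions, attributes, and thresholds are assumptions for illustration only, not the paper's rule base.

```python
# Illustrative sketch (attributes and memberships assumed): rank candidate
# terrain features as visual fixation landmarks by fuzzy evaluation of
# size, contrast, and uniqueness.
def ramp(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def landmark_score(size, contrast, uniqueness):
    big = ramp(size, 0.2, 0.9)          # prefer large features
    salient = ramp(contrast, 0.1, 0.8)  # prefer high contrast
    rare = ramp(uniqueness, 0.3, 0.9)   # prefer locally unique features
    return min(big, salient, rare)      # fuzzy AND as the minimum

candidates = {"church tower": (0.7, 0.9, 0.95), "field edge": (0.4, 0.3, 0.2)}
best = max(candidates, key=lambda name: landmark_score(*candidates[name]))
print(best)  # -> 'church tower'
```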
Applicability of Deep-Learning Technology for Relative Object-Based Navigation
2017-09-01
One of the possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static...
Insect navigation: do ants live in the now?
Graham, Paul; Mangan, Michael
2015-03-01
Visual navigation is a critical behaviour for many animals, and it has been particularly well studied in ants. Decades of ant navigation research have uncovered many ways in which efficient navigation can be implemented in small brains. For example, ants show us how visual information can drive navigation via procedural rather than map-like instructions. Two recent behavioural observations highlight interesting adaptive ways in which ants implement visual guidance. Firstly, it has been shown that the systematic nest searches of ants can be biased by recent experience of familiar scenes. Secondly, ants have been observed to show temporary periods of confusion when asked to repeat a route segment, even if that route segment is very familiar. Taken together, these results indicate that the navigational decisions of ants take into account their recent experiences as well as the currently perceived environment.
Raut, Anant; Thapa, Poshan; Citrin, David; Schwarz, Ryan; Gauchan, Bikash; Bista, Deepak; Tamrakar, Bibhu; Halliday, Scott; Maru, Duncan; Schwarz, Dan
2015-12-01
Patient navigation programs have been shown to be effective across multiple settings in guiding patients through the care delivery process. Limited experience and literature exist, however, for such programs in rural and resource-constrained environments. Patients living in such settings frequently have low health literacy and substantially lower social status than their providers. They typically have limited experience interfacing with formalized healthcare systems, and, when they do, their experience can be unpleasant and confusing. At a district hospital in rural far-western Nepal, we designed and implemented a patient navigation system that aimed to improve patients' subjective care experience. First, we hired and trained a team of patient navigators recruited from the local area. Their responsibility is exclusively to demonstrate compassion and to guide patients through their care process. Second, we designed visual cues throughout our hospital complex to assist in navigating patients through the buildings. Third, we incorporated the patient navigators within the management and communications systems of the hospital care team, and established standard operating procedures. We describe here our experiences and challenges in designing and implementing a patient navigator program. Such patient-centered systems may be relevant at other facilities in Nepal and globally where patient health literacy is low, patients come from backgrounds of substantial marginalization and disempowerment, and patient experience with healthcare facilities is limited.
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance the autonomous deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly in a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) Autonomous Rendezvous and Docking (AR&D) scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it. Such an architecture is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
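For readers unfamiliar with the technique, a minimal optical-flow sketch using OpenCV is shown below; it illustrates the generic building block, not the project's flight software, and the synthetic frames stand in for real camera imagery.

```python
import cv2
import numpy as np

# Minimal optical-flow sketch (our illustration of the generic technique).
# We synthesize a frame and a shifted copy, track features between them,
# and read off the dominant image motion, the basic building block of
# visual odometry.
rng = np.random.default_rng(0)
prev = (rng.random((240, 320)) * 255).astype(np.uint8)
prev = cv2.GaussianBlur(prev, (7, 7), 0)         # give features some structure
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))  # simulate camera motion

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

ok = status.flatten() == 1
flow = (p1[ok] - p0[ok]).reshape(-1, 2)
print("median image motion (px):", np.median(flow, axis=0))  # ~ (5, 3)
```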
NASA Astrophysics Data System (ADS)
Welch, Sharon S.
Topics discussed in this volume include aircraft guidance and navigation, optics for visual guidance of aircraft, spacecraft and missile guidance and navigation, lidar and ladar systems, microdevices, gyroscopes, cockpit displays, and automotive displays. Papers are presented on optical processing for range and attitude determination, aircraft collision avoidance using a statistical decision theory, a scanning laser aircraft surveillance system for carrier flight operations, star sensor simulation for astroinertial guidance and navigation, autonomous millimeter-wave radar guidance systems, and a 1.32-micron long-range solid state imaging ladar. Attention is also given to a microfabricated magnetometer using Young's modulus changes in magnetoelastic materials, an integrated microgyroscope, a pulsed diode ring laser gyroscope, self-scanned polysilicon active-matrix liquid-crystal displays, the history and development of coated contrast enhancement filters for cockpit displays, and the effect of the display configuration on the attentional sampling performance. (For individual items see A93-28152 to A93-28176, A93-28178 to A93-28180)
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.
Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S
2014-06-01
Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.
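A conceptual sketch of wide-field optic flow processing follows; it is a simplified illustration of the general idea (antisymmetrically weighted flow summation), not the paper's structured-singular-value controller.

```python
import numpy as np

# Conceptual sketch (our simplification): sum optic flow over the visual
# field with an antisymmetric left/right weighting; a flow imbalance yields
# a steering command away from the nearer surface.
azimuths = np.linspace(-np.pi / 2, np.pi / 2, 181)  # viewing angles (rad)
flow = np.abs(np.sin(azimuths))                     # translational flow pattern
flow[azimuths > 0] *= 1.5                           # nearer wall on the right

weights = np.sin(azimuths)                          # antisymmetric weighting
dtheta = azimuths[1] - azimuths[0]
steer_cmd = -np.sum(weights * flow) * dtheta        # wide-field inner product
print(steer_cmd)  # negative -> yaw left, away from the closer right wall
```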
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and number of collisions were measured in simulation experiments. There was a statistically significant difference between the simple AR technique and the proposed technique. We confirmed that depth perception in AR can be improved by the proposed seamless switching between AR and VR, and that providing an indication of the minimum distance also facilitates surgical tasks.
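The real-time minimum-distance read-out can be sketched with a nearest-neighbour query against the organ surface; the sketch below is our own illustration, assuming a pre-segmented organ point cloud rather than the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch (assumed setup): compute the minimum distance between the tracked
# tool tip and the nearest organ surface point, as could drive a distance
# read-out in an AR/VR navigation display.
organ_surface = np.random.default_rng(2).uniform(-0.05, 0.05, size=(5000, 3))
tree = cKDTree(organ_surface)  # build once per (pre-segmented) organ model

def min_distance_mm(tool_tip_xyz):
    dist, _idx = tree.query(tool_tip_xyz)  # nearest surface point
    return dist * 1000.0                   # metres -> millimetres

print(min_distance_mm(np.array([0.06, 0.0, 0.01])))
```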
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, A. C. M.; Fonseca, Jaime C.; Lima, Estevão; Vilaça, João L.
2014-03-01
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle onto a preoperative volume, the patient, 3D image data, and tracker system were registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and a minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed correctly until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
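Point-based rigid registration of the kind described is commonly solved with the standard Kabsch/SVD method; the sketch below illustrates that textbook approach on synthetic data and is not i3DPuncture's published implementation.

```python
import numpy as np

# Standard Kabsch/SVD point-based rigid registration (textbook method,
# shown on synthetic data): align seven patient-space fiducials to their
# image-space counterparts.
def register_rigid(patient_pts, image_pts):
    """Return rotation R and translation t with image ~ R @ patient + t."""
    pc, ic = patient_pts.mean(0), image_pts.mean(0)
    H = (patient_pts - pc).T @ (image_pts - ic)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, ic - R @ pc

rng = np.random.default_rng(3)
P = rng.uniform(size=(7, 3))                          # 7 tracked fiducials
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([10.0, -2.0, 5.0])        # same points, image space
R, t = register_rigid(P, Q)
print(np.allclose(P @ R.T + t, Q, atol=1e-9))         # -> True
```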
Seamless positioning and navigation by using geo-referenced images and multi-sensor data.
Li, Xun; Wang, Jinling; Li, Tao
2013-07-12
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper, we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy in indoor environments.
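A minimal sketch of how such a hybrid system might arbitrate between fixes, with GPS quality gating the blend; the weighting scheme is an assumption for illustration, not the paper's algorithm.

```python
# Illustrative arbitration sketch (weighting scheme assumed): trust the
# image-based fix when GPS quality is poor, otherwise blend the two.
def fuse_position(vision_fix, gps_fix, gps_hdop):
    if gps_fix is None or gps_hdop > 5.0:  # GPS unusable: trust vision
        return vision_fix
    w = min(1.0 / gps_hdop, 1.0)           # crude quality weighting
    return tuple(w * g + (1 - w) * v for g, v in zip(gps_fix, vision_fix))

print(fuse_position((10.0, 20.0), (10.5, 19.5), gps_hdop=2.0))
```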
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective: Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods: The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Conclusion: Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
NASA Astrophysics Data System (ADS)
Rudolph, Tobias; Ebert, Lars; Kowal, Jens
2006-03-01
Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieve this aim. The Siremobil Iso-C 3D has become a widely used imaging device which, in combination with a navigation system, enables the surgeon to navigate directly within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan, and the volume size (a cube of approx. 12 cm side length) limits its application. A regularly used alternative in computer assisted orthopedic surgery is to use a preoperatively acquired CT scan to visualize the operating field. However, the additional registration step necessary in order to use CT stacks for navigation is quite invasive. Therefore, the objective of this work is to develop a noninvasive registration technique. In this article a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing the mutual information, an algorithm that has already been applied to similar registration problems and demonstrated good results. Furthermore, the accuracy of this registration method was investigated in a clinical setup, integrating a navigated Iso-C 3D in combination with a tracking system. Initial tests based on cadaveric animal bone resulted in an accuracy ranging from 0.63 mm to 1.55 mm mean error.
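Mutual information measures how well the intensity distributions of the two volumes predict each other; the registration loop maximizes it over the rigid transform parameters. A minimal sketch of the metric itself (histogram binning is an assumption; the paper's optimizer is not shown):

```python
# Histogram-based mutual information between two overlapping volumes.
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two intensity volumes sampled on the same grid."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                   # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of volume a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of volume b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```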
Visualization of LC-MS/MS proteomics data in MaxQuant.
Tyanova, Stefka; Temu, Tikira; Carlson, Arthur; Sinitcyn, Pavel; Mann, Matthias; Cox, Juergen
2015-04-01
Modern software platforms enable the analysis of shotgun proteomics data in an automated fashion resulting in high quality identification and quantification results. Additional understanding of the underlying data can be gained with the help of advanced visualization tools that allow for easy navigation through large LC-MS/MS datasets potentially consisting of terabytes of raw data. The updated MaxQuant version has a map navigation component that steers the users through mass and retention time-dependent mass spectrometric signals. It can be used to monitor a peptide feature used in label-free quantification over many LC-MS runs and visualize it with advanced 3D graphic models. An expert annotation system aids the interpretation of the MS/MS spectra used for the identification of these peptide features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.
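Turning the vision-based pose estimate into a 3D audio cue amounts to rendering the landmark's bearing relative to the user's heading. The paper's audio engine is not described in detail here; a minimal stereo-panning stand-in under that assumption:

```python
# Left/right panning cue from user pose and landmark position (2D sketch).
import math

def pan_for_landmark(user_xy, user_heading_rad, landmark_xy):
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx) - user_heading_rad          # relative bearing
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    pan = math.sin(bearing)                  # -1 = hard left, +1 = hard right
    return (1 - pan) / 2, (1 + pan) / 2      # (left, right) gains, constant sum
```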
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Sandstrom, Timothy A.; Henze, Chris; Levit, Creon
2003-01-01
This paper presents the hyperwall, a visualization cluster that uses coordinated visualizations for interactive exploration of multidimensional data and simulations. The system strongly leverages the human eye-brain system with a generous 7x7 array of flat-panel LCD screens powered by a Beowulf cluster. With each screen backed by a workstation-class PC, graphics- and compute-intensive applications can be applied to a broad range of data. Navigational tools are presented that allow for investigation of high-dimensional spaces.
Zhu, Ming; Chai, Gang; Lin, Li; Xin, Yu; Tan, Andy; Bogari, Melia; Zhang, Yan; Li, Qingfeng
2016-12-01
Augmented reality (AR) technology can superimpose a computer-generated virtual image onto the real operating field to present an integrated image and enhance surgical safety. The purpose of our study is to develop a novel AR-based navigation system for craniofacial surgery. We focus on orbital hypertelorism correction, because the surgery requires high precision and is considered difficult even for senior craniofacial surgeons. Twelve patients with orbital hypertelorism were selected. The preoperative computed tomography data were imported into a 3-dimensional platform for preoperative design. The position and orientation of the virtual information and the real world were adjusted by an image registration process. The AR toolkits were used to realize the integrated image. Afterward, computed tomography was also performed after the operation to compare the preoperative plan with the actual operative outcome. Our AR-based navigation system was successfully used in these patients, directly displaying 3-dimensional navigational information on the surgical field. All patients achieved a better appearance under the guidance of the navigation image. The differences in interdacryon distance and in the dacryon point of each side were not significant (P > 0.05) between the preoperative plan and the actual surgical outcome. This study reports an effective visualized approach for guiding orbital hypertelorism correction. Our AR-based navigation system may lay a foundation for craniofacial surgery navigation, and AR technology can be considered a helpful tool for precise osteotomy in craniofacial surgery.
Navigation system for minimally invasive esophagectomy: experimental study in a porcine model.
Nickel, Felix; Kenngott, Hannes G; Neuhaus, Jochen; Sommer, Christof M; Gehrig, Tobias; Kolb, Armin; Gondan, Matthias; Radeleff, Boris A; Schaible, Anja; Meinzer, Hans-Peter; Gutt, Carsten N; Müller-Stich, Beat-Peter
2013-10-01
Navigation systems potentially facilitate minimally invasive esophagectomy and improve patient outcome by improving intraoperative orientation, position estimation of instruments, and identification of lymph nodes and resection margins. The authors' self-developed navigation system is highly accurate in static environments. This study aimed to test the overall accuracy of the navigation system in a realistic operating room scenario and to identify the different sources of error altering accuracy. To simulate a realistic environment, a porcine model (n = 5) was used with endoscopic clips in the esophagus as navigation targets. Computed tomography imaging was followed by image segmentation and target definition with the medical imaging interaction toolkit software. Optical tracking was used for registration and localization of animals and navigation instruments. Intraoperatively, the instrument was displayed relative to segmented organs in real time. The target registration error (TRE) of the navigation system was defined as the distance between the target and the navigation instrument tip. The TRE was measured on skin targets with the animal in the 0° supine and 25° anti-Trendelenburg position and on the esophagus during laparoscopic transhiatal preparation. On skin targets, the TRE was significantly higher in the 25° position, at 14.6 ± 2.7 mm, compared with the 0° position, at 3.2 ± 1.3 mm. The TRE on the esophagus was 11.2 ± 2.4 mm. The main source of error was soft tissue deformation caused by intraoperative positioning, pneumoperitoneum, surgical manipulation, and tissue dissection. The navigation system obtained acceptable accuracy with a minimally invasive transhiatal approach to the esophagus in a realistic experimental model. Thus the system has the potential to improve intraoperative orientation, identification of lymph nodes and adequate resection margins, and visualization of risk structures. Compensation methods for soft tissue deformation may lead to an even more accurate navigation system in the future.
Tangible interactive system for document browsing and visualisation of multimedia data
NASA Astrophysics Data System (ADS)
Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry
2006-01-01
In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities that include images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies, as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.
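Robust visual hashing assigns perceptually similar document images nearly identical fingerprints, so a captured page can be matched to its database entry. The paper's hash construction is not reproduced here; an average-hash style sketch conveys the idea:

```python
# Average-hash style robust fingerprint of a grayscale document image.
import numpy as np

def average_hash(gray, size=8):
    # gray: 2D grayscale array; crop so it tiles evenly, then block-average
    h, w = gray.shape
    small = gray[:h - h % size, :w - w % size]
    small = small.reshape(size, small.shape[0] // size,
                          size, small.shape[1] // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()  # 64-bit fingerprint

def hamming(a, b):
    return int(np.count_nonzero(a != b))   # small distance = likely same page
```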
A Markov chain model for image ranking system in social networks
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Toriu, Takashi; Hama, Hiromitsu
2014-03-01
In today's world, many kinds of networks exist, including social, technological, and business networks. All of these networks are similar in their distributions and in their continuous, large-scale growth. Among them, social networks such as Facebook, Twitter, and Flickr provide a powerful abstraction of the structure and dynamics of diverse kinds of interpersonal connection and interaction. Generally, social network content is created and consumed under the influence of the different social navigation paths that lead to it. Identifying important and user-relevant refined structures, such as visual information or communities, has therefore become a major factor in modern decision making. Moreover, traditional information ranking methods cannot succeed because they fail to take into account the properties of navigation paths driven by social connections. In this paper, we propose a novel image ranking system for social networks that uses relational graphs of social data from social media platforms jointly with visual data to improve the relevance between returned images and user intentions (i.e., social relevance). Specifically, we propose a Markov chain based Social-Visual Ranking algorithm that takes social relevance into account. Extensive experiments demonstrate the significance and effectiveness of the proposed social-visual ranking method.
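A minimal sketch of the random-walk core of such a ranking, assuming the social and visual relationships have already been condensed into nonnegative affinity matrices (the blend weight, damping, and matrix construction are illustrative, not taken from the paper):

```python
# Markov chain image ranking: power iteration over blended affinities.
import numpy as np

def markov_rank(social, visual, alpha=0.5, damping=0.85, iters=100):
    # social, visual: (N, N) nonnegative affinities with positive row sums
    W = alpha * social + (1 - alpha) * visual
    P = W / W.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    n = P.shape[0]
    r = np.full(n, 1.0 / n)                     # start from a uniform distribution
    for _ in range(iters):                      # power iteration with restart
        r = damping * (r @ P) + (1 - damping) / n
    return r / r.sum()                          # stationary-style ranking scores
```

Images with the highest scores are those most visited by a random surfer who follows social links and visual similarity in proportion to alpha.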
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank
2005-05-01
Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance, accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain-referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain-referenced navigation system can provide navigation and guidance information to the pilot on an SVS or conventional instruments. The terrain-referenced navigation system, under development at AEC, operates on principles similar to those of other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution and used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner which is capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size; and 2) the availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain-referenced navigator is used to provide guidance cues for a precision approach.
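The matching step pairs a patch of laser-derived elevations with the onboard database. A toy sketch of the grid-search version of that idea, with the WAAS-constrained search window and database layout as assumptions:

```python
# Terrain matching: find the DEM offset that best explains the laser scan.
import numpy as np

def best_offset(dem, scan, search=20):
    # dem:  (H, W) terrain database elevations (e.g., ~1 m post spacing)
    # scan: (h, w) gridded elevations measured by the airborne laser scanner
    h, w = scan.shape
    best, best_err = (0, 0), np.inf
    for dy in range(search):                    # window pre-shrunk by WAAS fix
        for dx in range(search):
            patch = dem[dy:dy + h, dx:dx + w]
            err = np.mean((patch - scan) ** 2)  # mean squared elevation error
            if err < best_err:
                best, best_err = (dy, dx), err
    return best, best_err                       # offset in DEM posts, fit quality
```

Production systems replace this brute-force loop with recursive estimators fused with the inertial navigator, but the residual being minimized is the same.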
Dendroscope: An interactive viewer for large phylogenetic trees
Huson, Daniel H; Richter, Daniel C; Rausch, Christian; Dezulian, Tobias; Franz, Markus; Rupp, Regula
2007-01-01
Background: Research in evolution requires software for visualizing and editing phylogenetic trees, for increasingly large datasets, such as arise in expression analysis or metagenomics, for example. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as a part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results: We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion: Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. PMID:18034891
Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.
2015-01-01
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses to provide information that is normally sensed visually through sensory substitution to replace missing visual information. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control for action in the blind or blindfolded. It appears that the use of sensory substitution devices utilizes the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. Here we review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473
Visual Orientation in Unfamiliar Gravito-Inertial Environments
NASA Technical Reports Server (NTRS)
Oman, Charles M.
1999-01-01
The goal of this project is to better understand the process of spatial orientation and navigation in unfamiliar gravito-inertial environments, and ultimately to use this new information to develop effective countermeasures against the orientation and navigation problems experienced by astronauts. How do we know the location, orientation, and motion of our body with respect to the external environment? On earth, gravity provides a convenient "down" cue. Large body rotations normally occur only in a horizontal plane. In space, the gravitational down cue is absent. When astronauts roll or pitch upside down, they must recognize where things are around them by a process of mental rotation which involves three dimensions, rather than just one. While working in unfamiliar situations they occasionally misinterpret visual cues and experience striking "visual reorientation illusions" (VRIs), in which the walls, ceiling, and floors of the spacecraft exchange subjective identities. VRIs cause disorientation and reaching errors, trigger attacks of space motion sickness, and potentially complicate emergency escape. MIR crewmembers report that 3D relationships between modules - particularly those with different visual verticals - are difficult to visualize, and so navigating through the node that connects them is not instinctive. Crew members learn routes, but their apparent lack of survey knowledge is a concern should fire, power loss, or depressurization limit visibility. Anecdotally, experience in mockups, parabolic flight, neutral buoyancy and virtual reality (VR) simulators helps. However, no techniques have been developed to quantify individual differences in orientation and navigation abilities, or the effectiveness of preflight visual orientation training. Our understanding of the underlying physiology - for example how our sense of place and orientation is neurally coded in three dimensions in the limbic system of the brain - is incomplete. During the 16 months that this human and animal research project has been underway, we have obtained several results that are not only of basic research interest, but which have practical implications for the architecture and layout of spacecraft interiors and for the development of astronaut spatial orientation training countermeasures.
Diver-based integrated navigation/sonar sensor
NASA Astrophysics Data System (ADS)
Lent, Keith H.
1999-07-01
Two diver-based systems, the Small Object Locating Sonar (SOLS) and the Integrated Navigation and Sonar Sensor (INSS), have been developed at Applied Research Laboratories, the University of Texas at Austin (ARL:UT). They are small, easy-to-use systems that allow a diver to: detect, classify, and identify underwater objects; render large-sector visual images; and track, map and reacquire diver location, diver path, and target locations. The INSS hardware consists of a unique, simple, single-beam high-resolution sonar, an acoustic navigation system, an electronic depth gauge, compass, and GPS and RF interfaces, all integrated with a standard 486-based PC. These diver sonars have been evaluated by the very shallow water mine countermeasure detachment since spring 1997. Results are very positive, showing significantly greater capabilities than current diver-held systems. For example, the detection ranges are increased over existing systems, and the system allows the divers to classify mines at a significant stand-off range. As a result, the INSS design has been chosen for acquisition as the next-generation diver navigation and sonar system. The EDMs for this system will be designed and built by ARL:UT during 1998 and 1999, with production planned in 2000.
Baumann, Oliver; Skilleter, Ashley J.; Mattingley, Jason B.
2011-01-01
The goal of the present study was to examine the extent to which working memory supports the maintenance of object locations during active spatial navigation. Participants were required to navigate a virtual environment and to encode the location of a target object. In the subsequent maintenance period they performed one of three secondary tasks that were designed to selectively load visual, verbal or spatial working memory subsystems. Thereafter participants re-entered the environment and navigated back to the remembered location of the target. We found that while navigation performance in participants with high navigational ability was impaired only by the spatial secondary task, navigation performance in participants with poor navigational ability was impaired equally by spatial and verbal secondary tasks. The visual secondary task had no effect on navigation performance. Our results extend current knowledge by showing that the differential engagement of working memory subsystems is determined by navigational ability. PMID:21629686
Srinivasan, Mandyam V
2011-04-01
Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.
2018-02-12
… usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the … assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4) …
Computation and visualization of uncertainty in surgical navigation.
Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A
2014-09-01
Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
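The paper propagates pose uncertainty analytically; as a stand-in, the Monte Carlo sketch below shows how a tip covariance (the quantity an uncertainty display would render, e.g., as an ellipsoid) can be obtained under assumed tracker noise levels:

```python
# Monte Carlo propagation of tracked-pose noise to the instrument tip.
import numpy as np

def tip_covariance(R, t, tip_local, rot_std=0.002, trans_std=0.3, n=5000):
    # R (3,3), t (3,): tracked tool pose; tip_local (3,): tip in tool coordinates
    # rot_std (rad) and trans_std (mm) are assumed, not measured, noise levels
    rng = np.random.default_rng(0)
    samples = np.empty((n, 3))
    for i in range(n):
        w = rng.normal(0.0, rot_std, 3)         # small random rotation vector
        K = np.array([[0.0, -w[2], w[1]],
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])
        dR = np.eye(3) + K                      # first-order rotation perturbation
        dt = rng.normal(0.0, trans_std, 3)
        samples[i] = dR @ (R @ tip_local) + t + dt
    return samples.mean(axis=0), np.cov(samples.T)   # mean tip, 3x3 covariance
```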
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
BiNA: A Visual Analytics Tool for Biological Network Data
Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael
2014-01-01
Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056
PERCEPT: indoor navigation for the blind and visually impaired.
Ganz, Aura; Gandhi, Siddhesh Rajan; Schafer, James; Singh, Tushar; Puleo, Elaine; Mullett, Gary; Wilson, Carole
2011-01-01
In order to enhance the perception of indoor and unfamiliar environments for the blind and visually-impaired, we introduce the PERCEPT system that supports a number of unique features such as: a) Low deployment and maintenance cost; b) Scalability, i.e. we can deploy the system in very large buildings; c) An on-demand system that does not overwhelm the user, as it offers small amounts of information on demand; and d) Portability and ease-of-use, i.e., the custom handheld device carried by the user is compact and instructions are received audibly.
Lifting business process diagrams to 2.5 dimensions
NASA Astrophysics Data System (ADS)
Effinger, Philip; Spielmann, Johannes
2010-01-01
In this work, we describe our visualization approach for business processes using 2.5-dimensional techniques (2.5D). The idea of 2.5D is to add the concept of layering to a two-dimensional (2D) visualization. The layers are arranged in a three-dimensional display space. For the modeling of the business processes, we use the Business Process Modeling Notation (BPMN). The benefit of connecting BPMN with a 2.5D visualization is not only to obtain a more abstract view of the business process models but also to develop layering criteria that eventually increase the readability of the BPMN model compared to 2D. We present a 2.5D Navigator for BPMN models that offers different perspectives for visualization. Therefore we also develop BPMN-specific perspectives. The 2.5D Navigator combines the 2.5D approach with perspectives and allows free navigation in the three-dimensional display space. We also demonstrate our tool and the libraries used for implementation of the visualizations. The underlying general framework for 2.5D visualizations is explored and presented in a fashion such that it can easily be used for different applications. Finally, an evaluation of our navigation tool demonstrates that we can achieve satisfying and aesthetic displays of diagrams representing BPMN models in 2.5D visualizations.
Integration for navigation on the UMASS mobile perception lab
NASA Technical Reports Server (NTRS)
Draper, Bruce; Fennema, Claude; Rochwerger, Benny; Riseman, Edward; Hanson, Allen
1994-01-01
Integration of real-time visual procedures for use on the Mobile Perception Lab (MPL) was presented. The MPL is an autonomous vehicle designed for testing visually guided behavior. Two critical areas of focus in the system design were data storage/exchange and process control. The Intermediate Symbolic Representation (ISR3) supported data storage and exchange, and the MPL script monitor provided process control. Resource allocation, inter-process communication, and real-time control are difficult problems which must be solved in order to construct strong autonomous systems.
Gnadt, William; Grossberg, Stephen
2008-06-01
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
Navigation ability dependent neural activation in the human brain: an fMRI study.
Ohnishi, Takashi; Matsuda, Hiroshi; Hirakata, Makiko; Ugawa, Yoshikazu
2006-08-01
Visual-spatial navigation in familiar and unfamiliar environments is an essential requirement of daily life. Animal studies indicated the importance of the hippocampus for navigation. Neuroimaging studies demonstrated gender differences or strategy-dependent differences in the neural substrates for navigation. Using functional magnetic resonance imaging, we measured brain activity related to navigation in four groups of normal volunteers: good navigators (males and females) and poor navigators (males and females). In a whole-group analysis, task-related activity was noted in the hippocampus, parahippocampal gyrus, posterior cingulate cortex, precuneus, parietal association areas, and the visual association areas. In group comparisons, good navigators showed stronger activation in the medial temporal area and precuneus than poor navigators. There was neither a sex effect nor an interaction between sex and navigation ability. Activity in the left medial temporal areas was positively correlated with task performance, whereas activity in the right parietal area was negatively correlated with task performance. Furthermore, activity in the bilateral medial temporal areas was positively correlated with scores reflecting preferred navigation strategies, whereas activity in the bilateral superior parietal lobules was negatively correlated with them. Our data suggest that differences in brain activity related to navigation reflect navigation skill and strategies.
Cockpit displayed traffic information and distributed management in air traffic control
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.
1980-01-01
A graphical display of information (such as surrounding aircraft and navigation routes) in the cockpit on a cathode ray tube has been proposed for improving the safety, orderliness, and expeditiousness of the air traffic control system. An investigation of this method at NASA-Ames indicated a large reduction in controller verbal workload without increasing pilot verbal workload; visual workload, however, may be increased. The cockpit displayed traffic and navigation information system reduced response delays, permitting pilots to maintain their spacing more closely and precisely than when depending entirely on controller-issued radar vectors and speed commands.
Object-oriented data model of the municipal transportation
NASA Astrophysics Data System (ADS)
Pan, Yuqing; Sheng, Yehua; Zhang, Guiying
2008-10-01
Transportation is one of the main problems faced by every big city in the world. Managing municipal transportation with GIS is becoming an important trend, and the data model is the foundation of a transportation information system. The organization and storage of the data must be considered carefully in the system design. The data model must not only meet the demands of vehicle navigation, but also achieve good visual effects and support the management and maintenance of traffic information. Following object-oriented theory and methods, the road network is divided into segments and intersections. This paper analyzes lanes, markings, signs and other transportation facilities and their relationships with segments and intersections, and constructs a municipal transportation data model that meets the demands of vehicle navigation, visualization, and management. The paper also organizes all kinds of transportation data. Practice proves that this data model can satisfy the application demands of traffic management systems.
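A minimal sketch of the segment/intersection decomposition described above, with lanes, markings and signs attached to the road elements they describe; the class and attribute names are illustrative, not taken from the paper:

```python
# Object-oriented road model: segments, intersections, and attached facilities.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Lane:
    width_m: float
    direction: str                        # e.g. "forward" or "backward"

@dataclass
class Sign:
    kind: str                             # e.g. "speed_limit"
    position: Tuple[float, float]         # map coordinates, for visualization

@dataclass
class Segment:
    segment_id: int
    geometry: List[Tuple[float, float]]   # polyline rendered on the map
    lanes: List[Lane] = field(default_factory=list)
    signs: List[Sign] = field(default_factory=list)

@dataclass
class Intersection:
    node_id: int
    position: Tuple[float, float]
    connected_segments: List[int] = field(default_factory=list)  # topology for routing
```

Keeping topology (intersection-to-segment links) separate from presentation attributes (geometry, markings, signs) is what lets one model serve navigation, visualization, and management at once.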
Kamel Boulos, Maged N; Roudsari, Abdul V; Carson, Ewart R
2002-12-01
HealthCyberMap (HCM-http://healthcybermap.semanticweb.org) is a web-based service for healthcare professionals and librarians, patients and the public in general that aims at mapping parts of the health information resources in cyberspace in novel ways to improve their retrieval and navigation. HCM adopts a clinical metadata framework built upon a clinical coding ontology for the semantic indexing, classification and browsing of Internet health information resources. A resource metadata base holds information about selected resources. HCM then uses GIS (Geographic Information Systems) spatialization methods to generate interactive navigational cybermaps from the metadata base. These visual cybermaps are based on familiar medical metaphors. HCM cybermaps can be considered as semantically spatialized, ontology-based browsing views of the underlying resource metadata base. Using a clinical coding scheme as a metric for spatialization ('semantic distance') is unique to HCM and is very much suited for the semantic categorization and navigation of Internet health information resources. Clinical codes ensure reliable and unambiguous topical indexing of these resources. HCM also introduces a useful form of cyberspatial analysis for the detection of topical coverage gaps in the resource metadata base using choropleth (shaded) maps of human body systems.
Valerio, Stephane; Clark, Benjamin J.; Chan, Jeremy H. M.; Frost, Carlton P.; Harris, Mark J.; Taube, Jeffrey S.
2010-01-01
Previous studies have identified neurons throughout the rat limbic system that fire as a function of the animal's head direction (HD). This HD signal is particularly robust when rats locomote in the horizontal and vertical planes, but is severely attenuated when locomoting upside-down (Calton & Taube, 2005). Given the hypothesis that the HD signal represents an animal's sense of its directional heading, we evaluated whether rats could accurately navigate in an inverted (upside-down) orientation. The task required the animals to find an escape hole while locomoting inverted on a circular platform suspended from the ceiling. In experiment 1, Long-Evans rats were trained to navigate to the escape hole by locomoting from either one or four start points. Interestingly, no animals from the 4-start point group reached criterion, even after 30 days of training. Animals in the 1-start point group reached criterion after about 6 training sessions. In Experiment 2, probe tests revealed that animals navigating from either 1- or 2-start points utilized distal visual landmarks for accurate orientation. However, subsequent probe tests revealed that their performance was markedly attenuated when required to navigate to the escape hole from a novel starting point. This absence of flexibility while navigating upside-down was confirmed in experiment 3 where we show that the rats do not learn to reach a place, but instead learn separate trajectories to the target hole(s). Based on these results we argue that inverted navigation primarily involves a simple directional strategy based on visual landmarks. PMID:20109566
An evaluation of unisensory and multisensory adaptive flight-path navigation displays
NASA Astrophysics Data System (ADS)
Moroney, Brian W.
1999-11-01
The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up display based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position has been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations, the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite were combined factorially with three concurrent tasks (CT), the no CT, the visual CT, and the auditory CT, a completely repeated measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface. It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range-sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed, targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for rehabilitation: Visual impairment is becoming more severe due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions at a greater sensing range to notify users of potential dangers in advance.
Navigation, behaviors, and control modes in an autonomous vehicle
NASA Astrophysics Data System (ADS)
Byler, Eric A.
1995-01-01
An Intelligent Mobile Sensing System (IMSS) has been developed for the automated inspection of radioactive and hazardous waste storage containers in warehouse facilities at Department of Energy sites. A 2D space of control modes was used that provides a combined view of reactive and planning approaches, wherein a 2D situation space is defined by dimensions representing the predictability of the agent's task environment and the constraint imposed by its goals. In this sense, selection of appropriate systems for planning, navigation, and control depends on the problem at hand. The IMSS vehicle navigation system is based on a combination of feature-based motion, landmark sightings, and an a priori logical map of the mockup storage facility. Motion for the inspection activities is composed of different interactions of several available control modes, several obstacle avoidance modes, and several feature identification modes. Features used to drive these behaviors are both visual and acoustic.
Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao
2018-06-25
Navigated inspection seeks to improve hazard identification (HI) accuracy. With tight inspection schedules, HI also requires efficiency. However, lacking a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency under navigation for a variety of visual clutter (VC) scenarios, using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection in improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.
33 CFR 175.135 - Existing equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...
Augmented virtuality for arthroscopic knee surgery.
Li, John M; Bardana, Davide D; Stewart, A James
2011-01-01
This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was conducted to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization in novice residents.
OSIRIX: open source multimodality image navigation software
NASA Astrophysics Data System (ADS)
Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman
2005-04-01
The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for high-end expensive hardware or software. We also elected to develop our system on new open-source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software package designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/
A-10 Thunderbolt II (Warthog) Systems Engineering Case Study
2010-01-01
… Visual Flight Rules (VFR) navigation aids. The "lean" package added Doppler navigation for night and adverse weather, and a radar ranger and gun … a big boost for the technology came in 1965 when the Air Force selected the TF39 engine to power the C-5 Galaxy heavy-lift aircraft. Still, there … and Staff College, entitled to wear the Ranger Tab, and has a real appreciation for the role of CAS in combat. Upon leaving active duty he served …
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Price, Richard; Marsh, Abbie J; Fisher, Marisa H
2018-03-01
Facilitating the use of public transportation enhances opportunities for independent living and competitive, community-based employment for individuals with intellectual and developmental disabilities (IDD). Four young adults with IDD were taught through total-task chaining to use the Google Maps application, a self-prompting, visual navigation system, to take the bus to locations around a college campus and the community. Three of four participants learned to use Google Maps to independently navigate public transportation. Google Maps may be helpful in supporting independent travel, highlighting the importance of future research in teaching navigation skills. Learning to independently use public transportation increases access to autonomous activities, such as opportunities to work and to attend postsecondary education programs on large college campuses. Individuals with IDD can be taught through chaining procedures to use the Google Maps application to navigate public transportation. Mobile map applications are an effective and functional modern tool that can be used to teach community navigation.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, the placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, and urology, many of which face increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
A novel visualization model for web search results.
Nguyen, Tien N; Zhang, Jin
2006-01-01
This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system, with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added to, or deleted from the visual space.
Aging and Sensory Substitution in a Virtual Navigation Task.
Levy-Tzedek, S; Maidenbaum, S; Amedi, A; Lackner, J
2016-01-01
Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took longer to complete the mazes, followed a longer path through the maze, paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took longer to complete the mazes, paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation
Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.
2012-01-01
Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation. PMID:22969737
Integrated INS/GPS Navigation from a Popular Perspective
NASA Technical Reports Server (NTRS)
Omerbashich, Mensur
2002-01-01
Inertial navigation, blended with other navigation aids, the Global Positioning System (GPS) in particular, has gained significance due to enhanced navigation and inertial reference performance and dissimilarity for fault tolerance and anti-jamming. Relatively new concepts based upon using Differential GPS (DGPS) blended with Inertial (and visual) Navigation Sensors (INS) offer the possibility of low-cost, autonomous aircraft landing. The FAA has decided to implement the system in a sophisticated form as a new standard navigation tool during this decade. There have been a number of new inertial sensor concepts in the recent past that emphasize the increased accuracy and reliability of INS/GPS versus INS alone, as well as smaller size and weight, higher power efficiency, fault tolerance, and long life. The principles of GPS are not discussed; rather, the attention is directed towards general concepts and comparative advantages. A short introduction to the problems faced in kinematics is presented. The intention is to relate the basic principles of kinematics to probably the most used navigation method of the future: INS/GPS. An example of the airborne INS is presented, with emphasis on how it works. A discussion of the error types and sources in navigation, and of the role of filters in optimal estimation of the errors, then follows. The main question this paper tries to answer is: "What are the benefits of the integration of INS and GPS, and how is this navigation concept of the future achieved in reality?" The main goal is to communicate the idea of what stands behind a modern navigation method.
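To make the blending idea concrete, the following toy sketch (not from the paper; the helper name ins_gps_blend, the noise values, and the data are all invented for illustration) shows a one-dimensional Kalman-style loop in which a position dead-reckoned from INS velocity is periodically corrected by sparse GPS fixes:

```python
import numpy as np

def ins_gps_blend(ins_velocity, gps_fixes, dt=0.1, q=0.05, r=4.0):
    """Toy 1-D INS/GPS blend: dead-reckon position from INS velocity,
    then correct with a GPS fix whenever one is available."""
    x, p = 0.0, 1.0              # position estimate and its variance
    track = []
    for k, v in enumerate(ins_velocity):
        x += v * dt              # predict: integrate INS velocity
        p += q                   # uncertainty grows with process noise
        z = gps_fixes.get(k)     # sparse GPS fix for this step, if any
        if z is not None:
            gain = p / (p + r)   # Kalman gain: trades INS drift vs GPS noise
            x += gain * (z - x)  # correction pulls the estimate toward GPS
            p *= 1.0 - gain
        track.append(x)
    return np.array(track)

# Noisy INS velocity (true velocity 1 m/s) plus a GPS fix every 25 steps.
rng = np.random.default_rng(1)
vel = 1.0 + rng.normal(0.0, 0.2, 100)
fixes = {k: (k + 1) * 0.1 for k in range(24, 100, 25)}  # near-true positions
print(ins_gps_blend(vel, fixes)[-1])  # close to the true final position, 10.0
```

Without the correction step, the dead-reckoned estimate drifts without bound; the sparse fixes bound the error, which is the core benefit of the integration the abstract describes.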
Acetylcholine contributes to the integration of self-movement cues in head direction cells.
Yoder, Ryan M; Chan, Jeremy H M; Taube, Jeffrey S
2017-08-01
Acetylcholine contributes to accurate performance on some navigational tasks, but details of its contribution to the underlying brain signals are not fully understood. The medial septal area provides widespread cholinergic input to various brain regions, but selective damage to medial septal cholinergic neurons generally has little effect on landmark-based navigation, or the underlying neural representations of location and directional heading in visual environments. In contrast, the loss of medial septal cholinergic neurons disrupts navigation based on path integration, but no studies have tested whether these path integration deficits are associated with disrupted head direction (HD) cell activity. Therefore, we evaluated HD cell responses to visual cue rotations in a familiar arena, and during navigation between familiar and novel arenas, after muscarinic receptor blockade with systemic atropine. Atropine treatment reduced the peak firing rate of HD cells, but failed to significantly affect other HD cell firing properties. Atropine also failed to significantly disrupt the dominant landmark control of the HD signal, even though we used a procedure that challenged this landmark control. In contrast, atropine disrupted HD cell stability during navigation between familiar and novel arenas, where path integration normally maintains a consistent HD cell signal across arenas. These results suggest that acetylcholine contributes to path integration, in part, by facilitating the use of idiothetic cues to maintain a consistent representation of directional heading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
2010-03-01
1979). As drivers' daily commuting times increase, and as new technologies such as BlackBerrys, navigation systems, DVDs, etc., become more pervasive ... Thomas, L.C., & Wickens, C.D. (2001). Visual displays and cognitive tunneling: frames of reference effects on spatial judgments and change
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, improving placement accuracy and, in some cases, better visualizing the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point-clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the density of an acquired 3D point-cloud and the corresponding surgical navigation error. A retrospective review of a total of 48 registrations performed using an experimental structured-light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. The number of points in the point cloud correlated neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point-cloud and the corresponding sagittal angular error. Thus, system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate registration based on anatomical landmarks prior to commencing surgery.
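As a minimal illustration of the kind of correlation analysis the study describes (a sketch over invented toy data, not the authors' code or numbers), the point-count-versus-error relationship can be tested with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_points = rng.integers(5_000, 50_000, size=48)         # points per registration
# Toy sagittal angular errors that shrink slightly as point count grows.
sag_err = 2.0 - 1e-5 * n_points + rng.normal(0.0, 0.2, 48)

r, p = stats.pearsonr(n_points, sag_err)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # negative r mirrors the reported trend
```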
Clarissa Spoken Dialogue System for Procedure Reading and Navigation
NASA Technical Reports Server (NTRS)
Hieronymus, James; Dowding, John
2004-01-01
Speech is the most natural modality humans use to communicate with other people, agents, and complex systems. A spoken dialogue system must be robust to noise and able to mimic human conversational behavior, such as correcting misunderstandings, answering simple questions about the task, and understanding most well-formed inquiries or commands. The system aims to understand the meaning of the human utterance, and if it does not, it discards the utterance as being meant for someone else. The first operational system is Clarissa, a conversational procedure reader and navigator, which will be used in a System Development Test Objective (SDTO) on the International Space Station (ISS) during Expedition 10. In the present environment, one astronaut reads the procedure on a Manual Procedure Viewer (MPV) or paper, and has to stop to read or turn pages, shifting focus from the task. Clarissa is designed to read and navigate ISS procedures entirely with speech, while the astronaut has his eyes and hands engaged in performing the task. The system also provides an MPV-like graphical interface so the procedure can be read visually. A demo of the system will be given.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and on developing ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
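For readers unfamiliar with the VO pipeline outlined above, a minimal two-frame pose step (a generic OpenCV sketch, not the authors' ARFM implementation; the intrinsic matrix K and the file names are placeholders) could look like this:

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """One VO step: detect ORB features in two frames, match them,
    fit an essential matrix with RANSAC, and recover relative pose."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t  # t is only up to scale for monocular input; stereo fixes the scale

# Usage (paths and intrinsics are placeholders):
# K = np.array([[718.0, 0, 607.0], [0, 718.0, 185.0], [0, 0, 1]])
# R, t = relative_pose(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0), K)
```

Chaining such frame-to-frame poses and refining them jointly is what the paper's BA module would then do; the "adaptive robust" part of ARFM presumably replaces the plain matcher sketched here.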
Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition
NASA Astrophysics Data System (ADS)
Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro
This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.
Cogné, Mélanie; Auriacombe, Sophie; Vasa, Louise; Tison, François; Klinger, Evelyne; Sauzéon, Hélène; Joseph, Pierre-Alain; N Kaoua, Bernard
2018-05-01
To evaluate whether visual cues are helpful for virtual spatial navigation and memory in patients with Alzheimer's disease (AD) and patients with mild cognitive impairment (MCI), 20 patients with AD, 18 patients with MCI, and 20 age-matched healthy controls (HC) were included. Participants had to actively reproduce a path that included 5 intersections, with one landmark at each intersection, that they had seen previously during a learning phase. Three cueing conditions for navigation were offered: salient landmarks, directional arrows, and a map. A path without additional visual stimuli served as the control condition. Navigation time and the number of trajectory mistakes were recorded. With the presence of directional arrows, no significant difference was found between groups concerning the number of trajectory mistakes and navigation time. The number of trajectory mistakes did not differ significantly between patients with AD and patients with MCI on the path with arrows, the path with salient landmarks, or the path with a map. There were significant correlations between the number of trajectory mistakes under the arrow condition and executive tests, and between the number of trajectory mistakes under the salient landmark condition and memory tests. Visual cueing such as directional arrows and salient landmarks appears helpful for spatial navigation and memory tasks in patients with AD and patients with MCI. This study opens new research avenues for neuro-rehabilitation, such as the use of augmented reality in real-life settings to support the navigational capabilities of patients with MCI and patients with AD. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation
Yang, Kailun; Wang, Kaiwei; Romera, Eduardo; Hu, Weijian; Sun, Dongming; Sun, Junwei; Cheng, Ruiqi; Chen, Tianxue; López, Elena
2018-01-01
Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework. PMID:29748508
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology for optic canal decompression in cases of traumatic optic neuropathy.
NASA Astrophysics Data System (ADS)
Štefanička, Tomáš; Ďuračiová, Renata; Seres, Csaba
2017-12-01
As a complex of buildings, the Faculty of Natural Sciences of the Comenius University in Bratislava tends to be difficult to navigate, in spite of its size. An indoor navigation application could potentially save a lot of time and frustration. There are currently numerous technologies used in indoor navigation systems. Some of them focus on a high degree of precision and require significant financial investment; others provide only static information about a current location. In this paper, we focused on the determination of an approximate location using the inertial measurement sensors available on most smartphones, i.e., a gyroscope and an accelerometer. The actual position of the device was calculated using a "walk detection method" based on a delayed lack of motion. We have developed an indoor navigation application that relies solely on open-source JavaScript libraries to visualize the interior of the building and calculate the shortest path using Dijkstra's routing algorithm. The application logic is located on the client side, so the software is able to work offline. Our solution represents an accessible, low-cost and platform-independent web application that can significantly improve navigation at the Faculty of Natural Sciences. Although our application has been developed for a specific building complex, it could be used in other interiors as well.
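As an illustrative sketch of the routing step described above (not the faculty application's JavaScript code; the toy corridor graph is invented), Dijkstra's algorithm over a small indoor graph can be written as:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted graph given as {node: [(neighbor, cost), ...]}."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy corridor graph: rooms/junctions as nodes, walking distances (m) as weights.
graph = {
    "entrance": [("hall", 10), ("stairs", 15)],
    "hall": [("lab_201", 20), ("stairs", 5)],
    "stairs": [("lab_201", 12)],
}
print(dijkstra(graph, "entrance", "lab_201"))
# -> (27.0, ['entrance', 'hall', 'stairs', 'lab_201'])
```

Keeping the graph and the algorithm on the client side, as the paper does, is what lets such an application route offline.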
[First clinical experience with extended planning and navigation in an interventional MRI unit].
Moche, M; Schmitgen, A; Schneider, J P; Bublat, M; Schulz, T; Voerkel, C; Trantakis, C; Bennek, J; Kahn, T; Busse, H
2004-07-01
To present an advanced concept for patient-based navigation and to report on our first clinical experience with interventions in the cranium, in soft-tissue structures (breast, liver), and in the musculoskeletal system. A PC-based navigation system was integrated into an existing interventional MRI environment. Intraoperatively acquired 3D data were used for interventional planning. The information content of these reference data was increased by integration of additional image modalities (e.g., fMRI, CT) and by color display of areas with early contrast media enhancement. Within 18 months, the system was used in 123 patients undergoing interventions in different anatomic regions (brain: 64, paranasal sinus: 9, breast: 20, liver: 17, bone: 9, muscle: 4). The mean duration of 64 brain interventions was compared with that of 36 procedures using the scanner's standard navigation. In contrast with the continuous scanning mode of the MR system (0.25 fps), the higher quality as well as the real-time display (4 fps) of the MR images reconstructed from the 3D reference data allowed adequate hand-eye coordination. With our system, patient movement and tissue shifts could be detected immediately intraoperatively and, in contrast to the standard procedure, navigation could be safely resumed after updating the reference data. The navigation system was characterized by good stability, efficient system integration, and easy usability. Despite additional working steps still to be optimized, the duration of the image-guided brain tumor resections was not significantly longer. The presented system combines the advantages of intraoperative MRI with the established visualization, planning, and real-time capabilities of neuronavigation, and can be efficiently applied in a broad range of non-neurosurgical interventions.
Integrating Space Systems Operations at the Marine Expeditionary Force Level
2015-06-01
• GPS Interference and Navigation Tool (GIANT) for providing GPS accuracy prediction reports • Systems Toolkit (STK) for analysis
Takeuchi, Megumi; Sugie, Tomoharu; Abdelazeem, Kassim; Kato, Hironori; Shinkura, Nobuhiko; Takada, Masahiro; Yamashiro, Hiroyasu; Ueno, Takayuki; Toi, Masakazu
2012-01-01
The indocyanine green fluorescence (ICGf) navigation method provides real-time lymphatic mapping and sentinel lymph node (SLN) visualization, which enables the removal of SLNs and their associated lymphatic networks. In this study, we investigated the features of the drainage pathways detected with the ICGf navigation system and the order of metastasis in axillary nodes. From April 2008 to February 2010, 145 patients with clinically node-negative breast cancer underwent SLN surgery with ICGf navigation. The video-recorded data from 79 patients were used for lymphatic mapping analysis. Fluorescence-positive SLNs were identified in 144 (99%) of the 145 patients. Both single and multiple routes to the axilla were identified in 47% of cases using the video-recorded lymphatic mapping data. An internal mammary route was detected in 6% of the cases. Skip metastasis to the second or third SLNs was observed in 6 of the 28 node-positive patients. We also examined the strategy of axillary surgery using the ICGf navigation system and found that, based on the features of nodal involvement, 4-node resection could provide precise information on the nodal status. The ICGf navigation system may provide a different lymphatic mapping result than computed tomography lymphography in clinically node-negative breast cancer patients. Furthermore, it enables the identification of lymph nodes that do not accumulate indocyanine green or dye adjacent to the SLNs in the sequence of drainage. Knowledge of the order of nodal metastasis as revealed by the ICGf system may help to personalize the surgical treatment of the axilla in SLN-positive cases, although additional studies are required. © 2012 Wiley Periodicals, Inc.
Sukegawa, Shintaro; Kanno, Takahiro; Shibata, Akane; Matsumoto, Kenichi; Sukegawa-Takahashi, Yuka; Sakaida, Kyosuke; Furuki, Yoshihiko
2017-01-15
A fracture of a root canal instrument, with the fractured piece protruding beyond the apex, is a troublesome incident during endodontic treatment. Locating and retrieving such fragments represents a challenge to maxillofacial surgeons because access is difficult due to the proximity between the foreign body and vital structures. Although safe and accurate for surgery, radiographs and electromagnetic devices do not provide a precise three-dimensional position. In contrast, computer-aided navigation provides a correlation between preoperatively collected data and intraoperatively encountered anatomy. However, using a navigation system for mandible treatment is difficult, as the mobile nature of the mandible complicates its synchronization with the preoperative imaging data during surgery. This report describes a case of a dental instrument breakage in the mandible during an endodontic treatment for a restorative dental procedure in a 65-year-old Japanese woman. The broken dental instrument was removed using a minimally invasive approach with a surgical navigation system and an interocclusal splint for a stable, identically repeatable positioning of the mandible. Using the three-dimensional position of the navigation probe, a location that best approximated the most anterior extent of the fragment was selected. A minimally invasive vestibular incision was made at this location, a subperiosteal reflection was performed, and the foreign body location was carefully confirmed using the navigation system. The instrument was carefully visualized, extruded from the apical side toward the tooth crown, and then removed using mosquito forceps through the medullary cavity on the crown side of the tooth. Follow-up was uneventful, and her clinical course was good. The use of a surgical navigation system together with an interocclusal splint enabled the retrieval of a broken dental instrument in a safe and minimally invasive manner without damaging the surrounding vital structures.
JS-MS: a cross-platform, modular javascript viewer for mass spectrometry signals.
Rosen, Jebediah; Handy, Kyle; Gillan, André; Smith, Rob
2017-11-06
Despite the ubiquity of mass spectrometry (MS), data processing tools can be surprisingly limited. To date, there is no stand-alone, cross-platform 3-D visualizer for MS data. Available visualization toolkits require large libraries with multiple dependencies and are not well suited for custom MS data processing modules, such as MS storage systems or data processing algorithms. We present JS-MS, a 3-D, modular JavaScript client application for viewing MS data. JS-MS provides several advantages over existing MS viewers, such as a dependency-free, browser-based, one-click, cross-platform install and better navigation interfaces. The client includes a modular Java backend with a novel streaming .mzML parser to demonstrate the API-based serving of MS data to the viewer. JS-MS enables custom MS data processing and evaluation by providing fast, 3-D visualization using improved navigation without dependencies. JS-MS is publicly available with a GPLv2 license at github.com/optimusmoose/jsms.
A risk analysis of winter navigation in Finnish sea areas.
Valdez Banda, Osiris A; Goerlandt, Floris; Montewka, Jakub; Kujala, Pentti
2015-06-01
Winter navigation is a complex but common operation in north-European sea areas. In Finnish waters, the smooth flow of maritime traffic and the safety of vessel navigation during the winter period are managed through the Finnish-Swedish winter navigation system (FSWNS). This article focuses on accident risks in winter navigation operations, beginning with a brief outline of the FSWNS. The study analyses a hazard identification model of winter navigation and reviews accident data extracted from four winter periods. These are adopted as a basis for visualizing the risks in winter navigation operations. The results reveal that experts consider independent ship navigation in ice conditions the most complex navigational operation, which is confirmed by accident data analysis showing that this operation constitutes the type of navigation with the highest number of accidents reported. The severity of the accidents during winter navigation is mainly categorized as less serious. Collision is the most typical accident in ice navigation, and general cargo vessels are the type of vessel most frequently involved in these accidents. Consolidated ice, ice ridges and ice thickness between 15 and 40 cm represent the most common ice conditions in which accidents occur. Thus, the analysis presented in this article establishes the key elements for identifying the operation types that would benefit most from further safety engineering and safety or risk management development. Copyright © 2015 Elsevier Ltd. All rights reserved.
The UAV take-off and landing system used for small areas of mobile vehicles
NASA Astrophysics Data System (ADS)
Ren, Tian-Yu; Duanmu, Qing-Duo; Wu, Bo-Qi
2018-03-01
GPS- and Beidou-based integrated navigation is prone to faults and insufficiencies in strong-jamming environments, and the resulting navigation errors normally force UAVs to use large landing sites, making recovery on small, fast-moving areas impractical. To realize a UAV formation cluster system under these conditions, a composite flight-control system is built from a strapdown inertial system and an all-optical system, achieving photoelectric composite strapdown inertial coupling. Through a compound communication mechanism of laser and microwave telemetry links, the all-optical strapdown inertial and visual navigation system compensates for the take-off and landing deviations caused by electromagnetic interference, and an all-optical bidirectional data link realizes two-way position correction between the landing site and the aircraft. This achieves the accurate recovery of a UAV formation cluster in a moving, confined area, which traditional navigation systems cannot realize. The result is an efficient group take-off and landing system suitable for many tasks: it provides reliable continuous navigation in complex electromagnetic-interference environments and supports intelligent flight, take-off and landing relative to fast-moving, small recovery sites, improving the safe-operation rate of the UAVs while guaranteeing flight safety, and it has important social value for prospective applications of such aircraft.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
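The single-multimodal-representation idea can be illustrated with standard inverse-variance cue combination (a generic sketch, not the authors' quantitative model; all numbers are invented): each cue contributes in proportion to its reliability, so a visual rotation-gain manipulation shifts the fused estimate only partway.

```python
def fuse_cues(mu_vis, var_vis, mu_int, var_int):
    """Inverse-variance (reliability-weighted) fusion of two heading estimates."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_int)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_int
    var = 1 / (1 / var_vis + 1 / var_int)   # the fused estimate is more reliable
    return mu, var

# A rotation-gain manipulation shifts the visual heading estimate to 100 deg
# while interoception still says 80 deg; the fused heading moves toward vision
# in proportion to vision's relative reliability.
print(fuse_cues(mu_vis=100.0, var_vis=25.0, mu_int=80.0, var_int=100.0))
# -> (96.0, 20.0): the fused heading sits 4/5 of the way toward the visual cue
```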
Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors
NASA Astrophysics Data System (ADS)
Lokka, I.; Çöltekin, A.
2016-06-01
The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for the confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention of training navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ("elements" in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations, and iii) the context in which the navigation is performed, that is, specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf
2017-08-01
A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for the planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions, but only a few on how to transfer planning data generated in external software. Patients' computed tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed into 3D images, and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures, such as trajectories, implants or surgical instruments, were either digitally constructed or CT-scanned and virtually positioned within the 3D model as required. As the crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement, using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but also the export of objects and surgical plans to any navigation or guidance system able to read DICOM data sets, expanding the possibilities of embedded planning software.
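A minimal sketch of the "fuse plan objects into a DICOM sequence" step might look as follows (an assumption-laden illustration, not the published method: it assumes an uncompressed, single-frame CT series, and the helper name and trajectory representation are invented):

```python
import numpy as np
import pydicom

def burn_trajectory_into_series(slice_paths, trajectory_voxels, out_dir):
    """Fuse planning objects with the original image data by 'burning' the
    voxels of a planned trajectory into the pixel arrays of a DICOM series,
    so any DICOM-capable navigation system can display the plan."""
    for z, path in enumerate(slice_paths):
        ds = pydicom.dcmread(path)
        pixels = ds.pixel_array.copy()
        bright = int(pixels.max())                 # draw the plan at max intensity
        for (vz, vy, vx) in trajectory_voxels:     # voxels as (slice, row, col)
            if vz == z:
                pixels[vy, vx] = bright
        ds.PixelData = pixels.tobytes()            # write modified pixels back
        ds.SOPInstanceUID = pydicom.uid.generate_uid()  # new UID: new object
        ds.save_as(f"{out_dir}/plan_{z:04d}.dcm")
```

A production version would also assign a new SeriesInstanceUID and handle compressed transfer syntaxes; the point here is only that the fused plan travels inside ordinary DICOM images, which is what makes it readable by any navigation system.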
Navigation assistance: a trade-off between wayfinding support and configural learning support.
Münzer, Stefan; Zimmer, Hubert D; Baus, Jörg
2012-03-01
Current GPS-based mobile navigation assistance systems support wayfinding, but they do not support learning about the spatial configuration of an environment. The present study examined effects of visual presentation modes for navigation assistance on wayfinding accuracy, route learning, and configural learning. Participants (high-school students) visited a university campus for the first time and took a predefined assisted tour. In Experiment 1 (n = 84, 42 females), a presentation mode showing wayfinding information from eye-level was contrasted with presentation modes showing wayfinding information included in views that provided comprehensive configural information. In Experiment 2 (n = 48, 24 females), wayfinding information was included in map fragments. A presentation mode which always showed north on top of the device was compared with a mode which rotated according to the orientation of the user. Wayfinding accuracy (deviations from the route), route learning, and configural learning (direction estimates, sketch maps) were assessed. Results indicated a trade-off between wayfinding and configural learning: Presentation modes providing comprehensive configural information supported the acquisition of configural knowledge at the cost of accurate wayfinding. The route presentation mode supported wayfinding at the cost of configural knowledge acquisition. Both presentation modes based on map fragments supported wayfinding. Individual differences in visual-spatial working memory capacity explained a considerable portion of the variance in wayfinding accuracy, route learning, and configural learning. It is concluded that learning about an unknown environment during assisted navigation is based on the integration of spatial information from multiple sources and can be supported by appropriate visualization. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Spatial cell firing during virtual navigation of open arenas by head-restrained mice.
Chen, Guifen; King, John Andrew; Lu, Yi; Cacucci, Francesca; Burgess, Neil
2018-06-18
We present a mouse virtual reality (VR) system which restrains head-movements to horizontal rotations, compatible with multi-photon imaging. This system allows expression of the spatial navigation and neuronal firing patterns characteristic of real open arenas (R). Comparing VR to R: place and grid, but not head-direction, cell firing had broader spatial tuning; place, but not grid, cell firing was more directional; theta frequency increased less with running speed; whereas increases in firing rates with running speed and place and grid cells' theta phase precession were similar. These results suggest that the omni-directional place cell firing in R may require local-cues unavailable in VR, and that the scale of grid and place cell firing patterns, and theta frequency, reflect translational motion inferred from both virtual (visual and proprioceptive) and real (vestibular translation and extra-maze) cues. By contrast, firing rates and theta phase precession appear to reflect visual and proprioceptive cues alone. © 2018, Chen et al.
David, R.; Stoessel, A.; Berthoz, A.; Spoor, F.; Bennequin, D.
2016-01-01
The semicircular duct system is part of the sensory organ of balance and essential for navigation and spatial awareness in vertebrates. Its function in detecting head rotations has been modelled with increasing sophistication, but the biomechanics of actual semicircular duct systems has rarely been analyzed, foremost because the fragile membranous structures in the inner ear are hard to visualize undistorted and in full. Here we present a new, easy-to-apply and non-invasive method for three-dimensional in-situ visualization and quantification of the semicircular duct system, using X-ray micro tomography and tissue staining with phosphotungstic acid. Moreover, we introduce Ariadne, a software toolbox which provides comprehensive and improved morphological and functional analysis of any visualized duct system. We demonstrate the potential of these methods by presenting results for the duct system of humans, the squirrel monkey and the rhesus macaque, making comparisons with past results from neurophysiological, oculometric and biomechanical studies. Ariadne is freely available at http://www.earbank.org. PMID:27604473
Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan
2011-01-01
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Andres, Kristin N; Powell, David K; Zhong, Xiaodong; Fornwalt, Brandon K
2017-03-01
To determine the optimal respiratory navigator gating configuration for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI. Two-dimensional spiral cine DENSE was performed on a 3 Tesla MRI using two single-navigator configurations (retrospective, prospective) and a combined "dual-navigator" configuration in 10 healthy adults and 20 healthy children. The adults also underwent breathhold DENSE as a reference standard for comparisons. Peak left ventricular strains, signal-to-noise ratio (SNR), and navigator efficiency were compared. Subjects also underwent dual-navigator gating with and without visual feedback to determine the effect on navigator efficiency. There were no differences in circumferential, radial, and longitudinal strains between navigator-gated and breathhold DENSE (P = 0.09-0.95) (as confidence intervals, retrospective: [-1.0%-1.1%], [-7.4%-2.0%], [-1.0%-1.2%]; prospective: [-0.6%-2.7%], [-2.8%-8.3%], [-0.3%-2.9%]; dual: [-1.6%-0.5%], [-8.3%-3.2%], [-0.8%-1.9%], respectively). The dual configuration maintained SNR compared with breathhold acquisitions (16 versus 18, P = 0.06). SNR for the prospective configuration was lower than for the dual navigator in adults (P = 0.004) and children (P < 0.001). Navigator efficiency was higher (P < 0.001) for both retrospective (54%) and prospective (56%) configurations compared with the dual configuration (35%). Visual feedback improved the dual configuration navigator efficiency to 55% (P < 0.001). When quantifying left ventricular strains using spiral cine DENSE MRI, a dual navigator configuration results in the highest SNR in adults and children. In adults, a retrospective configuration has good navigator efficiency without a substantial drop in SNR. Prospective gating should be avoided because it has the lowest SNR. Visual feedback represents an effective option to maintain navigator efficiency while using a dual navigator configuration. J. Magn. Reson. Imaging 2017;45:786-794. © 2016 International Society for Magnetic Resonance in Medicine.
Intraoperative 3-Dimensional Computed Tomography and Navigation in Foot and Ankle Surgery.
Chowdhary, Ashwin; Drittenbass, Lisca; Dubois-Ferrière, Victor; Stern, Richard; Assal, Mathieu
2016-09-01
Computer-assisted orthopedic surgery has developed dramatically during the past 2 decades. This article describes the use of intraoperative 3-dimensional computed tomography and navigation in foot and ankle surgery. Traditional imaging based on serial radiography or C-arm-based fluoroscopy does not provide simultaneous real-time 3-dimensional imaging, and thus leads to suboptimal visualization and guidance. Three-dimensional computed tomography allows for accurate intraoperative visualization of the position of bones and/or navigation implants. Such imaging and navigation helps to further reduce intraoperative complications, leads to improved surgical outcomes, and may become the gold standard in foot and ankle surgery. [Orthopedics. 2016; 39(5):e1005-e1010.]. Copyright 2016, SLACK Incorporated.
HyMoTrack: A Mobile AR Navigation System for Complex Indoor Environments.
Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes
2015-12-24
Navigating in unknown big indoor environments with static 2D maps is a challenge, especially when time is a critical factor. In order to provide a mobile assistant, capable of supporting people while navigating in indoor locations, an accurate and reliable localization system is required in almost every corner of the building. We present a solution to this problem through a hybrid tracking system specifically designed for complex indoor spaces, which runs on mobile devices like smartphones or tablets. The developed algorithm only uses the available sensors built into standard mobile devices, especially the inertial sensors and the RGB camera. The combination of multiple optical tracking technologies, such as 2D natural features and features of more complex three-dimensional structures guarantees the robustness of the system. All processing is done locally and no network connection is needed. State-of-the-art indoor tracking approaches use mainly radio-frequency signals like Wi-Fi or Bluetooth for localizing a user. In contrast to these approaches, the main advantage of the developed system is the capability of delivering a continuous 3D position and orientation of the mobile device with centimeter accuracy. This makes it usable for localization and 3D augmentation purposes, e.g. navigation tasks or location-based information visualization.
Brayfield, Brad P.
2016-01-01
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
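The pixel-by-pixel familiarity test at the core of the NSFH is compact enough to sketch directly (a toy illustration, not the authors' algorithm; the sensor geometry is simplified to plain image arrays):

```python
import numpy as np

def most_familiar_heading(current_views, stored_snapshots):
    """NSFH-style steering: among candidate headings, pick the one whose view
    best matches (lowest sum-of-squared-differences) any training snapshot.

    current_views: dict mapping heading (deg) -> grayscale image array
    stored_snapshots: list of grayscale image arrays from the training run
    """
    def familiarity(view):
        # Lower SSD against the best-matching snapshot = more familiar scene.
        return min(np.sum((view.astype(float) - s.astype(float)) ** 2)
                   for s in stored_snapshots)
    return min(current_views, key=lambda h: familiarity(current_views[h]))

# Toy example: the 0-degree view exactly matches a training snapshot, while
# the left/right candidate views are rotated (rolled) versions of it.
snap = np.random.randint(0, 255, (32, 90))
views = {0: snap.copy(),
         -30: np.roll(snap, 10, axis=1),
         30: np.roll(snap, -10, axis=1)}
print(most_familiar_heading(views, [snap]))  # -> 0
```

Note that the agent never memorizes the snapshot sequence: at each step it simply turns toward the most familiar scene, which is exactly the aliasing-sensitive mechanism the study's indoor image landscape was built to test.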
ERIC Educational Resources Information Center
Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2017-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Navigation Operational Concept,
1991-08-01
Area Control Facility; AFSS: Automated Flight Service Station; AGL: Above Ground Level; ALSF-2: Approach Light System with Sequence Flasher Model 2; ATC: Air...equipment contributes less than 0.30 NM error at the missed approach point. This total system use accuracy allows for flight technical error of up to...means for transition from instrument to visual flight. This function is provided by a series of standard lighting systems: the Approach Lighting
Remote Sensing of Martian Terrain Hazards via Visually Salient Feature Detection
NASA Astrophysics Data System (ADS)
Al-Milli, S.; Shaukat, A.; Spiteri, C.; Gao, Y.
2014-04-01
The main objective of the FASTER remote sensing system is the detection of rocks on planetary surfaces by employing models that can efficiently characterise rocks in terms of semantic descriptions. The proposed technique abates some of the algorithmic limitations of existing methods, with no training requirements, lower computational complexity and greater robustness towards visual tracking applications over long-distance planetary terrains. Visual saliency models inspired by biological systems help to identify important regions (such as rocks) in the visual scene. Surface rocks are therefore completely described in terms of their local or global conspicuity pop-out characteristics. These local and global pop-out cues include (but are not limited to) colour, depth, orientation, curvature, size, luminance intensity, shape and topology. The currently applied methods follow a purely bottom-up strategy of visual attention for selection of conspicuous regions in the visual scene, without any top-down control. Furthermore, the models chosen (tested and evaluated) are relatively fast among the state-of-the-art and have very low computational load. Quantitative evaluation of these state-of-the-art models was carried out using benchmark datasets including the Surrey Space Centre Lab Testbed, PANGU-generated images, RAL Space SEEKER and CNES Mars Yard datasets. The analysis indicates that models based on visually salient information in the frequency domain (SRA, SDSR, PQFT) are the best performing ones for detecting rocks in an extra-terrestrial setting. In particular, the SRA model appears to be the optimum of the lot, especially as it requires the least computational time while keeping errors competitively low. The salient objects extracted using these models can then be merged with the Digital Elevation Models (DEMs) generated from the same navigation cameras and fused into the navigation map, thus giving a clear indication of rock locations.
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
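As a worked statement of the geometric relation the abstract relies on, the magnitude of the translational optic flow experienced at azimuth θ is commonly written as follows (a standard textbook relation; the symbols v, D, and ω are ours, not the authors'):

```latex
% v      : linear speed of the observer relative to the scene
% D      : distance to the obstacle or surface seen at azimuth \theta
% \omega : optic flow magnitude (rad/s)
\omega(\theta) = \frac{v}{D}\,\sin\theta
```

Because only the ratio v/D appears, an insect (or robot) holding ω constant automatically trades speed against clearance without measuring either quantity separately.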
NASA Technical Reports Server (NTRS)
Galante, Joseph M.; Eepoel, John Van; Strube, Matt; Gill, Nat; Gonzalez, Marcelo; Hyslop, Andrew; Patrick, Bryan
2012-01-01
Argon is a flight-ready sensor suite with two visual cameras, a flash LIDAR, an on-board flight computer, and associated electronics. Argon was designed to provide sensing capabilities for relative navigation during proximity, rendezvous, and docking operations between spacecraft. A rigorous ground test campaign assessed the performance capability of the Argon navigation suite to measure the relative pose of high-fidelity satellite mock-ups during a variety of simulated rendezvous and proximity maneuvers facilitated by robot manipulators in a variety of lighting conditions representative of the orbital environment. A brief description of the Argon suite and test setup is given, as well as an analysis of the performance of the system in simulated proximity and rendezvous operations.
Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment
Velázquez, Ramiro; Pissaloux, Edwige; Lay-Ekuakille, Aimé
2015-01-01
Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments. PMID:27019593
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications [1,5–7]. Annotation of images is...Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self...learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
The Digital Space Shuttle, 3D Graphics, and Knowledge Management
NASA Technical Reports Server (NTRS)
Gomez, Julian E.; Keller, Paul J.
2003-01-01
The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.
Spacecraft Guidance, Navigation, and Control Visualization Tool
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of the hardware and software conception and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions regarding movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness, with promising perspectives for further work.
Neubauer, Aljoscha S; Langer, Julian; Liegl, Raffael; Haritoglou, Christos; Wolf, Armin; Kozak, Igor; Seidensticker, Florian; Ulbig, Michael; Freeman, William R; Kampik, Anselm; Kernt, Marcus
2013-01-01
The purpose of this study was to evaluate and compare clinical outcomes and retreatment rates using navigated macular laser versus conventional laser for the treatment of diabetic macular edema (DME). In this prospective, interventional pilot study, 46 eyes from 46 consecutive patients with DME were allocated to receive macular laser photocoagulation using navigated laser. Best corrected visual acuity and retreatment rate were evaluated for up to 12 months after treatment. The control group was drawn based on chart review of 119 patients treated by conventional laser at the same institutions during the same time period. Propensity score matching was performed with Stata, based on the nearest-neighbor method. Propensity score matching for age, gender, baseline visual acuity, and number of laser spots yielded 28 matched patients for the control group. Visual acuity after navigated macular laser improved from a mean 0.48 ± 0.37 logMAR by a mean +2.9 letters after 3 months, while the control group showed a mean -4.0 letters (P = 0.03). After 6 months, navigated laser maintained a mean visual gain of +3.3 letters, and the conventional laser group showed a slower mean increase to +1.9 letters versus baseline. Using Kaplan-Meier analysis, the laser retreatment rate showed separation of the survival curves after 2 months, with fewer retreatments in the navigated group than in the conventional laser group during the first 8 months (18% versus 31%, respectively, P = 0.02). The short-term results of this pilot study suggest that navigated macular photocoagulation is an effective technique and could be considered as a valid alternative to conventional slit-lamp laser for DME when focal laser photocoagulation is indicated. The observed lower retreatment rates with navigated retinal laser therapy in the first 8 months suggest a more durable treatment effect.
Navigation lymphatic supermicrosurgery for the treatment of cancer-related peripheral lymphedema.
Yamamoto, Takumi; Yamamoto, Nana; Numahata, Takao; Yokoyama, Ai; Tashiro, Kensuke; Yoshimatsu, Hidehiko; Narushima, Mitsunaga; Koshima, Isao
2014-02-01
Lymphatic supermicrosurgery is becoming the treatment of choice for refractory lymphedema. Detection and anastomosis of functional lymphatic vessels are important for lymphatic supermicrosurgery. Navigation lymphatic supermicrosurgery was performed using an operating microscope equipped with an integrated near-infrared illumination system (OPMI Pentero Infrared 800; Carl Zeiss, Oberkochen, Germany). Eight patients with extremity lymphedema who underwent navigation lymphatic supermicrosurgery were evaluated. A total of 21 lymphaticovenular anastomoses were performed on 8 limbs through 14 skin incisions. Lymphatic vessels were enhanced by intraoperative microscopic indocyanine green (ICG) lymphography in 12 of the 14 skin incisions, which resulted in early dissection of lymphatic vessels. All anastomoses showed good anastomosis patency after completion of anastomoses. Postoperative extremity lymphedema index decreased in all limbs. Navigation lymphatic supermicrosurgery, in which lymphatic vessels are visualized with intraoperative microscopic ICG lymphography, allows a lymphatic supermicrosurgeon to find and dissect lymphatic vessels earlier and facilitates successful performance of lymphaticovenular anastomosis.
Hamlet, Sean M.; Haggerty, Christopher M.; Suever, Jonathan D.; Wehner, Gregory J.; Andres, Kristin N.; Powell, David K.; Fornwalt, Brandon K.
2016-01-01
Purpose To determine the optimal respiratory navigator gating configuration for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI. Materials and Methods 2D spiral cine DENSE was performed on a 3T MRI using two single-navigator configurations (retrospective, prospective), and a combined “dual-navigator” configuration in 10 healthy adults and 20 healthy children. The adults also underwent breath-hold DENSE as a reference standard for comparisons. Peak left ventricular strains, signal-to-noise ratio (SNR) and navigator efficiency were compared. Subjects also underwent dual-navigator gating with and without visual feedback to determine the effect on navigator efficiency. Results There were no differences in circumferential, radial and longitudinal strains between navigator-gated and breath-hold DENSE (p=0.09–0.95) (as confidence intervals, retrospective: [−1.0%,1.1%],[−7.4%,2.0%],[−1.0%,1.2%]; prospective: [−0.6%,2.7%],[−2.8%,8.3%],[−0.3%,2.9%]; dual: [−1.6%,0.5%],[−8.3%,3.2%],[−0.8%,1.9%], respectively). The dual configuration maintained SNR compared to breath-hold acquisitions (16 vs. 18, p=0.06). SNR for the prospective configuration was lower than for the dual navigator in adults (p=0.004) and children (p<0.001). Navigator efficiency was higher (p<0.001) for both retrospective (54%) and prospective (56%) configurations compared to the dual configuration (35%). Visual feedback improved the dual configuration navigator efficiency to 55% (p<0.001). Conclusion When quantifying left ventricular strains using spiral cine DENSE MRI, a dual navigator configuration results in the highest SNR in adults and children. In adults, a retrospective configuration has good navigator efficiency without a substantial drop in SNR. Prospective gating should be avoided since it has the lowest SNR. Visual feedback represents an effective option to maintain navigator efficiency while using a dual navigator configuration. PMID:27458823
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
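A minimal sketch of the single-image fusion idea argued for above, assuming co-registered, same-size grayscale frames; the linear blend and the weight of 0.7 are illustrative assumptions, not the system's actual fusion rule:

```python
import numpy as np

def fuse(visual: np.ndarray, infrared: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend a grayscale visual frame with a co-registered infrared frame.
    A high alpha keeps the familiar visual scene dominant, so the IR
    content enhances rather than obscures it."""
    v = visual.astype(np.float32) / 255.0
    ir = infrared.astype(np.float32) / 255.0
    fused = alpha * v + (1.0 - alpha) * ir
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)
```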
GPS/MEMS IMU/Microprocessor Board for Navigation
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James; Ott, William E.
2009-01-01
A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
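A minimal sketch of the GPS/IMU blending step described above, reduced to one axis; the noise covariances, sample rates, and variable names are illustrative assumptions, not the flight filter's actual tuning:

```python
import numpy as np

# State: [position, velocity]. IMU acceleration drives the prediction;
# a GPS position fix drives the correction.
dt = 0.01                             # IMU sample period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]]) # state transition
B = np.array([[0.5 * dt**2], [dt]])   # how acceleration enters the state
H = np.array([[1.0, 0.0]])            # GPS observes position only
Q = 1e-4 * np.eye(2)                  # process noise (IMU errors), assumed
R = np.array([[4.0]])                 # GPS position variance (m^2), assumed

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    y = gps_pos - H @ x                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P
```

Dead-reckoning runs at the IMU rate via predict(); whenever a GPS fix arrives, update() pulls the drifting inertial solution back toward it.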
Turlure, Camille; Schtickzelle, Nicolas; Van Dyck, Hans; Seymoure, Brett; Rutowski, Ronald
2016-01-01
Understanding dispersal is of prime importance in conservation and population biology. Individual traits related to motion and navigation during dispersal may differ: (1) among species differing in habitat distribution, which, in turn, may lead to interspecific differences in the potential for and costs of dispersal; (2) among populations of a species that experience different levels of habitat fragmentation; (3) among individuals differing in their dispersal strategy; and (4) between the sexes, due to sexual differences in behaviour and dispersal tendencies. In butterflies, the visual system plays a central role in dispersal, but exactly how the visual system is related to dispersal has received far less attention than flight morphology. We studied two butterfly species to explore the relationships between flight and eye morphology, and dispersal. We predicted interspecific, intraspecific and intersexual differences for both flight and eye morphology relative to (i) species-specific habitat distribution, (ii) variation in dispersal strategy within each species and (iii) behavioural differences between the sexes. However, we did not investigate potential population differences. We found: (1) sexual differences that presumably reflect different demands on male and female visual and flight systems; (2) higher wing loading (a proxy for flight performance), larger eyes and larger facet sizes in the frontal and lateral regions of the eye (i.e. better navigation capacities) in the species inhabiting naturally fragmented habitat compared to the species inhabiting rather continuous habitat; and (3) larger facets in the frontal region in dispersers compared to residents within a species. Hence, dispersers may have similar locomotory capacity but potentially better navigation capacity. Dispersal ecology and evolution have attracted much attention, but there are still significant gaps in our understanding of the mechanisms of dispersal. Unfortunately, for many species we lack detailed information on the role of behavioural, morphological and physiological traits in dispersal. Our study supports the existence of inter- and intra-specific evolutionary responses in both motion and navigation capacities (i.e. flight and eye morphology) linked to dispersal.
A mobile phone system to find crosswalks for visually impaired pedestrians
Shen, Huiying; Chan, Kee-Yip; Coughlan, James; Brabyn, John
2010-01-01
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian’s travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera’s field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections. PMID:20411035
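The abstract does not spell out the detection algorithm, but a generic heuristic for zebra-crosswalk detection - several bright, elongated, similarly oriented blobs in the lower part of the frame - can be sketched with OpenCV as follows; all thresholds here are illustrative assumptions:

```python
import cv2
import numpy as np

def looks_like_crosswalk(frame_bgr: np.ndarray, min_stripes: int = 3) -> bool:
    """Flag a probable zebra crossing: several bright, elongated,
    roughly parallel blobs in the lower half of the frame."""
    h = frame_bgr.shape[0]
    roi = cv2.cvtColor(frame_bgr[h // 2:], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    angles = []
    for c in contours:
        if cv2.contourArea(c) < 200:                  # ignore small speckles
            continue
        (_, _), (bw, bh), angle = cv2.minAreaRect(c)
        if max(bw, bh) > 4 * max(min(bw, bh), 1e-3):  # elongated blob
            angles.append(angle % 90)
    # several elongated blobs with similar orientation -> stripe pattern
    return len(angles) >= min_stripes and float(np.std(angles)) < 10.0
```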
Skordis-Worrall, Jolene; Pulkki-Brännström, Anni-Maria; Utley, Martin; Kembhavi, Gayatri; Bricki, Nouria; Dutoit, Xavier; Rosato, Mikey; Pagel, Christina
2012-12-21
There are calls for low- and middle-income countries to develop robust health financing policies to increase service coverage. However, existing evidence around financing options is complex and often difficult for policy makers to access. We aimed to summarize the evidence on the impact of health system financing and to develop an e-tool to help decision makers navigate the findings. After reviewing the literature, we used thematic analysis to summarize the impact of 7 common health financing mechanisms on 5 common health system goals. Information on the relevance of each study to a user's context was provided by 11 country indicators. A Web-based e-tool was then developed to assist users in navigating the literature review. This tool was evaluated using feedback from early users, collected through an online survey and in-depth interviews with key informants. The e-tool provides graphical summaries that allow a user to assess the following parameters with a single snapshot: the number of relevant studies available in the literature, the heterogeneity of evidence, where key evidence is lacking, and how closely the evidence matches their own context. Users particularly liked the visual display and found navigating the tool intuitive. However, there was concern that a lack of evidence of positive impact might be construed as evidence against a financing option, and that the tool might over-simplify the available financing options. Complex evidence can be made more easily accessible, and potentially more understandable, using basic Web-based technology and innovative graphical representations that match findings to the users' goals and context.
NASA Astrophysics Data System (ADS)
Müller, M. S.; Urban, S.; Jutzi, B.
2017-08-01
The number of unmanned aerial vehicles (UAVs) is increasing as low-cost airborne systems become available to a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions such as Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can be used to support the GNSS solution, e.g. by closing trajectory gaps, but are computationally expensive. However, if the trajectory estimation is interrupted or unavailable, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.
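A minimal PyTorch sketch of the transfer-learning setup the abstract describes: an ImageNet-pretrained SqueezeNet with its classifier replaced by a regression head. The 7-parameter pose (position plus quaternion) and the PoseNet-style weighted loss are assumptions borrowed from related work, not necessarily the authors' exact design:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; swap the 1000-class head for a 7-value pose head.
net = models.squeezenet1_1(weights="IMAGENET1K_V1")  # torchvision >= 0.13 API
net.classifier = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Conv2d(512, 7, kernel_size=1),  # xyz position + unit quaternion
    nn.AdaptiveAvgPool2d((1, 1)),
)  # SqueezeNet's forward() flattens this to shape (N, 7)

def pose_loss(pred: torch.Tensor, target: torch.Tensor, beta: float = 250.0):
    """PoseNet-style loss: position error plus scaled orientation error
    (beta balances meters against quaternion units; the value is assumed)."""
    q_pred = nn.functional.normalize(pred[:, 3:], dim=1)
    return (nn.functional.mse_loss(pred[:, :3], target[:, :3])
            + beta * nn.functional.mse_loss(q_pred, target[:, 3:]))
```

Only the small head is trained from scratch; fine-tuning the pretrained features is what keeps the approach feasible for on-board hardware.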
ERIC Educational Resources Information Center
Davies, Daniel K.; Stock, Steven E.; Holloway, Shane; Wehmeyer, Michael L.
2010-01-01
We examined the utility of a PDA-based software system with integrated GPS technology for providing location-aware visual and auditory prompts to enable people with intellectual disability to successfully navigate a downtown bus route. Participants using the system were significantly more successful at completing a bus route than were people in a…
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
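For concreteness, the control rendering described above - resizing the camera image onto the electrode array according to average brightness - can be sketched as follows; the cell-binning scheme is an assumption, and the 15 × 18 grid matches the simulated array:

```python
import numpy as np

def control_rendering(gray: np.ndarray, rows: int = 15, cols: int = 18) -> np.ndarray:
    """Map a grayscale frame onto the electrode array: each electrode
    takes the mean brightness of the image cell that falls on it."""
    h, w = gray.shape
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            out[r, c] = cell.mean()
    return out
```

The distance-based and wireframe renderings would replace the brightness input with a depth map or an edge map before the same downsampling step.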
Address entry while driving: speech recognition versus a touch-screen keyboard.
Tsimhoni, Omer; Smith, Daniel; Green, Paul
2004-01-01
A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.
An assessment of auditory-guided locomotion in an obstacle circumvention task.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2016-06-01
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometime occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
Evolved Navigation Theory and Horizontal Visual Illusions
ERIC Educational Resources Information Center
Jackson, Russell E.; Willey, Chela R.
2011-01-01
Environmental perception is prerequisite to most vertebrate behavior and its modern investigation initiated the founding of experimental psychology. Navigation costs may affect environmental perception, such as overestimating distances while encumbered (Solomon, 1949). However, little is known about how this occurs in real-world navigation or how…
An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.
1997-09-01
The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles . The approach is to... autonomous vehicles . Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at
The Brussels Metro: Accessibility through Collaboration
ERIC Educational Resources Information Center
Strickfaden, Megan; Devlieger, Patrick
2011-01-01
This article describes and analyzes the development of a navigation and orientation system for people with visual impairments as it evolved over three decades. It includes reflections on how users have been involved in the redesign process and illustrates how people with and without disabilities have collaborated to create a more suitable and…
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually become central to robot localization. Simultaneous localization and mapping (SLAM) using visual features is a technique in which a system obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to achieve high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach achieves a more accurate quantitative simulation of insect navigation, reaching centimeter-level positioning accuracy.
Navigational Guidance and Ablation Planning Tools for Interventional Radiology.
Sánchez, Yadiel; Anvari, Arash; Samir, Anthony E; Arellano, Ronald S; Prabhakar, Anand M; Uppot, Raul N
Image-guided biopsy and ablation rely on successful identification and targeting of lesions. Currently, image-guided procedures are routinely performed under ultrasound, fluoroscopy, magnetic resonance imaging, or computed tomography (CT) guidance. However, these modalities have their limitations, including inadequate visibility of the lesion; lesion, organ, or patient motion; compatibility of instruments in a magnetic resonance imaging field; and, for CT and fluoroscopy cases, radiation exposure. Recent advances in technology have resulted in the development of a new generation of navigational guidance tools that can aid in targeting lesions for biopsy or ablation. These navigational guidance tools have evolved from simple hand-held trajectory guidance tools, to electronic needle visualization, to image fusion, to the development of a body global positioning system, to growth in cone-beam CT, and to ablation volume planning. These navigational systems are promising technologies that not only have the potential to improve lesion targeting (thereby increasing the diagnostic yield of a biopsy or the success of tumor ablation) but also have the potential to decrease radiation exposure to the patient and staff, decrease procedure time, decrease sedation requirements, and improve patient safety. The purpose of this article is to describe the challenges in current standard image-guided techniques, provide a definition and overview of these next-generation navigational devices, and describe the current limitations of these still-evolving, next-generation navigational guidance tools. Copyright © 2017 Elsevier Inc. All rights reserved.
Neural correlates of virtual route recognition in congenital blindness.
Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice
2010-07-13
Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.
NASA Astrophysics Data System (ADS)
Chi, Chongwei; Zhang, Qian; Kou, Deqiang; Ye, Jinzuo; Mao, Yamin; Qiu, Jingdan; Wang, Jiandong; Yang, Xin; Du, Yang; Tian, Jie
2014-02-01
Intraoperative precise positioning and accurate resection of tumors and metastases are currently an international research focus. Methods such as X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnosis. However, most of them are inapplicable to intraoperative use. We have proposed a surgical navigation system based on optical molecular imaging technology for intraoperative detection of tumors and metastases. This system collects images from two CCD cameras for real-time fluorescence and color imaging. For image processing, a template matching algorithm is used for multispectral image fusion. For the application of tumor detection, the mouse breast cancer cell line 4T1-luc, which is highly metastatic, was used to establish a tumor model of matrix metalloproteinase (MMP)-expressing breast cancer. The tumor-bearing nude mice were given a tail-vein injection of the MMP 750FAST probe (PerkinElmer, Inc., USA) and imaged with both bioluminescence and fluorescence to assess in vivo binding of the probe to the tumor and metastasis sites. Hematoxylin and eosin (H&E) staining was performed to confirm the presence of tumor and metastases. As a result, one tumor could be observed visually in vivo, whereas liver metastases were detected under the surgical navigation system; all were confirmed by histology. This approach helps surgeons find orthotopic tumors and metastases during intraoperative resection and visualize tumor borders for precise positioning. Further investigation is needed for future application in the clinic.
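The abstract names template matching as the fusion step but gives no details; a plausible OpenCV sketch that locates the fluorescence field of view inside the color frame by normalized cross-correlation is shown below. The pre-processing and matching criterion are assumptions:

```python
import cv2
import numpy as np

def locate_fluorescence(color: np.ndarray, fluo: np.ndarray):
    """Return the (x, y) offset at which the (smaller) fluorescence frame
    best matches the color frame, for overlaying the two channels."""
    g_color = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    g_fluo = cv2.normalize(fluo, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    result = cv2.matchTemplate(g_color, g_fluo, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc
```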
Visual Place Learning in Drosophila melanogaster
Ofstad, Tyler A.; Zuker, Charles S.; Reiser, Michael B.
2011-01-01
The ability of insects to learn and navigate to specific locations in the environment has fascinated naturalists for decades. While the impressive navigation abilities of ants, bees, wasps, and other insects clearly demonstrate that insects are capable of visual place learning [1-4], little is known about the underlying neural circuits that mediate these behaviors. Drosophila melanogaster is a powerful model organism for dissecting the neural circuitry underlying complex behaviors, from sensory perception to learning and memory. Flies can identify and remember visual features such as size, color, and contour orientation [5, 6]. However, the extent to which they use vision to recall specific locations remains unclear. Here we describe a visual place-learning platform and demonstrate that Drosophila are capable of forming and retaining visual place memories to guide selective navigation. By targeted genetic silencing of small subsets of cells in the Drosophila brain we show that neurons in the ellipsoid body, but not in the mushroom bodies, are necessary for visual place learning. Together, these studies reveal distinct neuroanatomical substrates for spatial versus non-spatial learning, and substantiate Drosophila as a powerful model for the study of spatial memories. PMID:21654803
Design and Development of a Mobile Sensor Based the Blind Assistance Wayfinding System
NASA Astrophysics Data System (ADS)
Barati, F.; Delavar, M. R.
2015-12-01
Blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most of the blind undertake route finding and navigation with the help of a guide. In addition, other aids such as a cane, a guide dog or electronic aids are used. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective decision-support methods using non-visual media are needed to improve the quality of life of the blind through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are obstacle recognition to guide the blind and the design and implementation of a mobile sensor system for their wayfinding and navigation. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation in the wayfinding. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals, with respect to the speed of sound in the environment, to estimate the distance to obstacles. The coordinates and characteristics of all the obstacles in the study area are already stored in a GIS database, and all of these obstacles were labeled on the map. The ultrasonic sensor designed and constructed in this study can detect obstacles at distances of 2 cm to 400 cm. The implementation, and the results of interviews with a number of blind persons who used the sensor, verified that the designed mobile wayfinding sensor system was very satisfactory.
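The distance computation the abstract describes reduces to the classic time-of-flight relation; a small sketch, assuming the speed of sound in air at roughly room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 deg C (assumed constant)

def echo_distance(round_trip_s: float) -> float:
    """Distance to an obstacle from the send-to-echo interval of an
    ultrasonic ping; the pulse travels out and back, hence the halving."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# A ~23.3 ms round trip corresponds to about 4 m,
# the upper end of the 2 cm - 400 cm range quoted above.
print(round(echo_distance(0.0233), 2))  # 4.0
```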
2015-06-01
Keywords: Visualization, Graph Navigation, Visual Literacy. ...obscured and individual edges that could be traversed before bundling are now completely lost among the bundled edges. (Section 2.3, Visual Literacy)
The current status and future prospects of computer-assisted hip surgery.
Inaba, Yutaka; Kobayashi, Naomi; Ike, Hiroyuki; Kubota, So; Saito, Tomoyuki
2016-03-01
The advances in computer assistance technology have allowed detailed three-dimensional preoperative planning and simulation of preoperative plans. The use of a navigation system as an intraoperative assistance tool allows more accurate execution of the preoperative plan, compared to manual operation without the assistance of the navigation system. In total hip arthroplasty using CT-based navigation, three-dimensional preoperative planning with computer software allows the surgeon to determine the optimal angle of implant placement at which implant impingement is unlikely to occur in the range of hip joint motion necessary for activities of daily living, and to determine the amount of three-dimensional correction for leg length and offset. With the use of computer navigation for intraoperative assistance, the preoperative plan can be precisely executed. In hip osteotomy using CT-based navigation, the navigation allows three-dimensional preoperative planning, intraoperative confirmation of osteotomy sites, safe performance of osteotomy even under poor visual conditions, and a reduction in exposure doses from intraoperative fluoroscopy. The positions of the tips of chisels can be displayed on the computer monitor during surgery in real time, and staff other than the operator can also be aware of the progress of surgery. Thus, computer navigation also has an educational value. On the other hand, its limitations include the need for placement of trackers, increased radiation exposure from preoperative CT scans, and prolonged operative time. Moreover, because the position of a bone fragment cannot be traced after osteotomy, methods to find its precise position after movement need to be developed. Despite the need to develop methods for the postoperative evaluation of accuracy in osteotomy, further application and development of these systems are expected in the future. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
Chen, Xiaojun; Xu, Lu; Wang, Yiping; Wang, Huixiang; Wang, Fang; Zeng, Xiangsen; Wang, Qiugen; Egger, Jan
2015-06-01
The surgical navigation system has experienced tremendous development over the past decades for minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In the AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With the use of this system, including the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were respectively 0.809 ± 0.05 mm and 1.038° ± 0.05°, which was sufficient to meet the clinical requirements. Copyright © 2015 Elsevier Inc. All rights reserved.
Obstacle Characterization in a Geocrowdsourced Accessibility System
NASA Astrophysics Data System (ADS)
Qin, H.; Aburizaiza, A. O.; Rice, R. M.; Paez, F.; Rice, M. T.
2015-08-01
Transitory obstacles - random, short-lived and unpredictable objects - are difficult to capture in any traditional mapping system, yet they have significant negative impacts on the accessibility of mobility- and visually-impaired individuals. These transitory obstacles include sidewalk obstructions, construction detours, and poor surface conditions. To identify these obstacles and assist the navigation of mobility- and visually-impaired individuals, crowdsourced mapping applications have been developed to harvest and analyze volunteered obstacle reports from local students, faculty, staff, and residents. In this paper, we introduce a training program designed and implemented for recruiting and motivating contributors to participate in our geocrowdsourced accessibility system, and explore the quality of geocrowdsourced data with a comparative analysis methodology.
A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.
ERIC Educational Resources Information Center
Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David
2003-01-01
A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…
Akesson, Susanne; Wehner, Rüdiger
2002-07-01
Central-place foraging insects such as desert ants of the genus Cataglyphis use both path integration and landmarks to navigate during foraging excursions. The use of landmark information and a celestial system of reference for nest location was investigated by training desert ants returning from an artificial feeder to find the nest at one of four alternative positions located asymmetrically inside a four-cylinder landmark array. The cylindrical landmarks were all of the same size and arranged in a square, with the nest located in the southeast corner. When released from the compass direction experienced during training (southeast), the ants searched most intensely at the fictive nest position. When instead released from any of the three alternative directions of approach (southwest, northwest or northeast), the same individuals instead searched at two of the four alternative positions by initiating their search at the position closest to the direction of approach when entering the landmark square and then returning to the position at which snapshot, current landmark image and celestial reference information were in register. The results show that, in the ants' visual snapshot memory, a memorized landmark scene can temporarily be decoupled from a memorized celestial system of reference.
Navigation for fluoroscopy-guided cryo-balloon ablation procedures of atrial fibrillation
NASA Astrophysics Data System (ADS)
Bourier, Felix; Brost, Alexander; Kleinoeder, Andreas; Kurzendorfer, Tanja; Koch, Martin; Kiraly, Attila; Schneider, Hans-Juergen; Hornegger, Joachim; Strobel, Norbert; Kurzidim, Klaus
2012-02-01
Atrial fibrillation (AFib), the most common arrhythmia, has been identified as a major cause of stroke. The current standard in interventional treatment of AFib is pulmonary vein isolation (PVI). PVI is guided by fluoroscopy or non-fluoroscopic electro-anatomic mapping systems (EAMS). Either classic point-to-point radio-frequency (RF) catheter ablation or so-called single-shot devices like cryo-balloons are used to achieve electrical isolation of the pulmonary veins and the left atrium (LA). Fluoroscopy-based systems render overlay images from pre-operative 3-D data sets, which are then merged with fluoroscopic imaging, thereby adding detailed 3-D information to conventional fluoroscopy. EAMS provide tracking and visualization of RF catheters by means of electro-magnetic tracking. Unfortunately, current navigation systems, fluoroscopy-based or EAMS, do not provide tools to localize and visualize single-shot devices like cryo-balloon catheters in 3-D. We present a prototype software for fluoroscopy-guided ablation procedures that is capable of superimposing 3-D datasets as well as reconstructing cryo-balloon catheters in 3-D. The 3-D cryo-balloon reconstruction was evaluated on 9 clinical data sets, yielding a reprojected 2-D error of 1.72 mm +/- 1.02 mm.
NASA Astrophysics Data System (ADS)
Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.
2018-02-01
The article is devoted to the development of Advanced Driver Assistance Systems (ADAS) for the GAZelle NEXT car. The project aims to develop a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver of a possible frontal collision; monitoring of blind zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure warning; monitoring of the driver's condition; navigation; and an all-round view. The arrangement of the sensors of the developed driver visual information system is presented, the operation of the systems on a prototype vehicle is considered, and possible changes to the interior and dashboard of the car are described. The implementation results are aimed at improved informing of the driver about the environment and at the development of an ergonomic interior for the new Functional Salon of the GAZelle NEXT vehicle equipped with the driver visual information system.
Solar photovoltaic systems in the development of Papua New Guinea
NASA Astrophysics Data System (ADS)
Kinnell, G. H.
Geographic and demographic features of Papua New Guinea are summarized, together with current applications of photovoltaic (PV) systems. The PV systems displace the increasing costs of generating power from diesel and kerosene powered units. PV systems power air navigation aids for the extensive air transport used in the absence of a road system. Remote television and visual aid education is possible with PV modules. A total of 50 kW of PV power is presently implemented, with the bulk dedicated to microwave repeater stations, navigation aids, and radio and lighting supplies. A village pumping installation is in operation, as are office lighting and ventilation, house lighting, and construction camp lighting. Another 350 kW is planned for the next 10 yr to run medical supply refrigeration, and further growth is seen for coupling with government-developed village lighting kits that feature industrial reflectors.
The Impact of Accelerated Promotion Rates on Drill Sergeant Performance
2011-01-01
land navigation, communication (voice/visual), NBC protection). I have good knowledge of most Warrior tasks; I have sufficient skills to handle...but seldom reach out on my own initiative. I communicate and work well with others regardless of background; I encourage attitudes of tolerance and...
Olfaction Contributes to Pelagic Navigation in a Coastal Shark.
Nosal, Andrew P; Chao, Yi; Farrara, John D; Chai, Fei; Hastings, Philip A
2016-01-01
How animals navigate the constantly moving and visually uniform pelagic realm, often along straight paths between distant sites, is an enduring mystery. The mechanisms enabling pelagic navigation in cartilaginous fishes are particularly understudied. We used shoreward navigation by leopard sharks (Triakis semifasciata) as a model system to test whether olfaction contributes to pelagic navigation. Leopard sharks were captured alongshore, transported 9 km offshore, released, and acoustically tracked for approximately 4 h each until the transmitter released. Eleven sharks were rendered anosmic (nares occluded with cotton wool soaked in petroleum jelly); fifteen were sham controls. Mean swimming depth was 28.7 m. On average, tracks of control sharks ended 62.6% closer to shore, following relatively straight paths that were significantly directed over spatial scales exceeding 1600 m. In contrast, tracks of anosmic sharks ended 37.2% closer to shore, following significantly more tortuous paths that approximated correlated random walks. These results held after swimming paths were adjusted for current drift. This is the first study to demonstrate experimentally that olfaction contributes to pelagic navigation in sharks, likely mediated by chemical gradients as has been hypothesized for birds. Given the similarities between the fluid three-dimensional chemical atmosphere and ocean, further research comparing swimming and flying animals may lead to a unifying paradigm explaining their extraordinary navigational abilities.
Boccia, M; Piccardi, L; Palermo, L; Nemmi, F; Sulpizio, V; Galati, G; Guariglia, C
2014-09-05
Visual mental imagery is a process that draws on different cognitive abilities and is affected by the contents of mental images. Several studies have demonstrated that different brain areas subtend the mental imagery of navigational and non-navigational contents. Here, we set out to determine whether there are distinct representations for navigational and geographical images. Specifically, we used a Spatial Compatibility Task (SCT) to assess the mental representation of a familiar navigational space (the campus), a familiar geographical space (the map of Italy) and familiar objects (the clock). Twenty-one participants judged whether the vertical or the horizontal arrangement of items was correct. We found that distinct representational strategies were preferred to solve different categories on the SCT, namely, the horizontal perspective for the campus and the vertical perspective for the clock and the map of Italy. Furthermore, we found significant effects due to individual differences in the vividness of mental images and in preferences for verbal versus visual strategies, which selectively affect the contents of mental images. Our results suggest that imagining a familiar navigational space is somewhat different from imagining a familiar geographical space.
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
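As a rough illustration of the feature-based half of such a tracker, the sketch below matches invariant features between images taken before and after a robot move and fits the motion that best explains the matches. It assumes OpenCV, and it uses a 2-D affine fit as a stand-in for the full stereo 6-DOF estimate; it is not the authors' implementation.

```python
# Minimal sketch of feature-based incremental motion estimation.
# Assumes OpenCV; a 2-D affine model stands in for the stereo 6-DOF motion.
import cv2
import numpy as np

def incremental_motion(img_before, img_after):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img_before, None)
    kp2, des2 = orb.detectAndCompute(img_after, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches[:100]])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches[:100]])
    # Robust fit of the motion explaining the matches (RANSAC rejects outliers)
    M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
    return M
```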
Amicuzi, Ileana; Stortini, Massimo; Petrarca, Maurizio; Di Giulio, Paola; Di Rosa, Giuseppe; Fariello, Giuseppe; Longo, Daniela; Cannatà, Vittorio; Genovese, Elisabetta; Castelli, Enrico
2006-10-01
We report the case of a 4.6-year-old girl born pre-term with early bilateral occipital damage. It was revealed that the child had non-severely impaired basic visual abilities and ocular motility, a selective perceptual deficit of figure-ground segregation, impaired visual recognition and abnormal navigation through space. Even though the child's visual functioning was not optimal, it was the expression of adaptive anatomic and functional brain modifications that occurred following the early lesion. Anatomic brain structure was studied with anatomic MRI and Diffusion Tensor Imaging (DTI)-MRI. This behavioral study may provide an important contribution to understanding the impact of an early lesion of the visual system on the development of visual functions and on the immature brain's potential for reorganisation related to when the damage occurred.
The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.
Watanabe, Eiju; Satoh, Makoto; Konno, Takehiko; Hirai, Masahiro; Yamaguchi, Takashi
2016-03-01
The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to rotate the dimension of the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed with corresponding angles as viewed from the camera direction, thus giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and the localization of superficial tumors. The main advantage of the present system is that it achieves volumetric navigation in contrast to conventional point-to-point navigation. It extends augmented reality images directly onto real surgical images, thus helping the surgeon to integrate these 2 dimensions intuitively.
Alderliesten, Tanja; Loo, Claudette; Paape, Anita; Muller, Sara; Rutgers, Emiel; Peeters, Marie-Jeanne Vrancken; Gilhuijs, Kenneth
2010-06-01
The aim of this study was to investigate the feasibility of image-guided navigation approaches to demarcate breast cancer on the basis of preacquired magnetic resonance (MR) imaging in supine patient orientation. Strategies were examined to minimize the uncertainty in the instrument-tip position, based on the hypothesis that the release of instrument pressure returns the breast tissue to its predeformed state. For this purpose, four sources of uncertainty were taken into account: (1) U(ligaments): Uncertainty in the reproducibility of the internal mammary gland geometry during repeat patient setup in supine orientation; (2) U(r_breathing): Residual uncertainty in registration of the breast after compensation for breathing motion using an external marker; (3) U(reconstruction): Uncertainty in the reconstructed location of the tip of the needle using an optical image-navigation system (phantom experiments, n = 50); and (4) U(deformation): Uncertainty in displacement of breast tumors due to needle-induced tissue deformations (patients, n = 21). A Monte Carlo study was performed to establish the 95% confidence interval (CI) of the combined uncertainties. This region of uncertainty was subsequently visualized around the reconstructed needle tip as an additional navigational aid in the preacquired MR images. Validation of the system was performed in five healthy volunteers (localization of skin markers only) and in two patients. In the patients, the navigation system was used to monitor ultrasound-guided radioactive seed localization of breast cancer. Nearest distances between the needle tip and the tumor boundary in the ultrasound images were compared to those in the concurrently reconstructed MR images. Both U(reconstruction) and U(deformation) were normally distributed with 0.1 +/- 1.2 mm (mean +/- 1 SD) and 0.1 +/- 0.8 mm, respectively. Taking prior estimates for U(ligaments) (0.0 +/- 1.5 mm) and U(r_breathing) (-0.1 +/- 0.6 mm) into account, the combined impact resulted in 3.9 mm uncertainty in the position of the needle tip (95% CI) after release of pressure. The volunteer study showed a targeting accuracy comparable to that in the phantom experiments: 2.9 +/- 1.3 versus 2.7 +/- 1.1 mm, respectively. In the patient feasibility study, the deviations were within the 3.9 mm CI. Image-guided navigation to demarcate breast cancer on the basis of preacquired MR images in supine orientation appears feasible if patient breathing is tracked during the navigation procedure, positional uncertainty is visualized and pressure on the localization instrument is released prior to verification of its position.
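The combination of the four uncertainty sources lends itself to a short worked example. The sketch below is a minimal 1-D Monte Carlo combination using the means and standard deviations quoted in the abstract; the study's actual simulation details are not reproduced here, so this only illustrates the principle.

```python
# Minimal 1-D sketch of the Monte Carlo combination of the four uncertainty
# sources (means/SDs in mm taken from the abstract; 3-D details omitted).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sources = {
    "U_ligaments":      (0.0, 1.5),
    "U_r_breathing":   (-0.1, 0.6),
    "U_reconstruction": (0.1, 1.2),
    "U_deformation":    (0.1, 0.8),
}
total = sum(rng.normal(mu, sd, n) for mu, sd in sources.values())
ci95 = np.percentile(np.abs(total), 95)   # one-sided radius of the 95% CI
print(f"95% CI radius: {ci95:.1f} mm")    # same order as the reported 3.9 mm
```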
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them. PMID:28608811
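The online mode lends itself to a small sketch of the phone-to-backend exchange. The endpoint URL and JSON layout below are hypothetical, not the authors' actual protocol; the sketch only illustrates posting a captured frame and filtering the returned detections.

```python
# Minimal sketch of the online mode: post a captured frame to a backend
# detector (Faster R-CNN or YOLO server side) and read back detections.
# The URL and reply format are assumptions for illustration.
import requests

def detect_obstacles(jpeg_bytes, server="http://backend.example/detect"):
    resp = requests.post(server,
                         files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
                         timeout=5.0)
    resp.raise_for_status()
    # Assumed reply: [{"label": "chair", "box": [x, y, w, h], "score": 0.87}, ...]
    return [d for d in resp.json() if d["score"] > 0.5]
```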
Underwater and surface behavior of homing juvenile northern elephant seals.
Matsumura, Moe; Watanabe, Yuuki Y; Robinson, Patrick W; Miller, Patrick J O; Costa, Daniel P; Miyazaki, Nobuyuki
2011-02-15
Northern elephant seals, Mirounga angustirostris, travel between colonies along the west coast of North America and foraging areas in the North Pacific. They also have the ability to return to their home colony after being experimentally translocated. However, the mechanisms of this navigation are not known. Visual information could serve an important role in navigation, either primary or supplementary. We examined the role of visual cues in elephant seal navigation by translocating three seals and recording their heading direction continuously using GPS, and acceleration and geomagnetic data loggers while they returned to the colony. The seals first reached the coast and then proceeded to the colony by swimming along the coast. While underwater the animals exhibited a horizontally straight course (mean net-to-gross displacement ratio=0.94±0.02). In contrast, while at the surface they changed their headings up to 360 deg. These results are consistent with the use of visual cues for navigation to the colony. The seals may visually orient by using landmarks as they swim along the coast. We further assessed whether the seals could maintain a consistent heading while underwater during drift dives where one might expect that passive spiraling during drift dives could cause disorientation. However, seals were able to maintain the initial course heading even while underwater during drift dives where there was spiral motion (to within 20 deg). This behavior may imply the use of non-visual cues such as acoustic signals or magnetic fields for underwater orientation.
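The straightness measure quoted above (net-to-gross displacement ratio) is easy to compute from a track. A minimal sketch, with an invented track for illustration:

```python
# Net-to-gross displacement ratio: straight-line distance from start to end
# divided by summed path length (1.0 = perfectly straight track).
import numpy as np

def net_to_gross_ratio(xy):
    """xy: (n, 2) array of track positions in metres."""
    steps = np.diff(xy, axis=0)
    gross = np.sum(np.linalg.norm(steps, axis=1))   # total distance swum
    net = np.linalg.norm(xy[-1] - xy[0])            # start-to-end distance
    return net / gross

track = np.array([[0, 0], [10, 1], [20, -1], [30, 0]], dtype=float)  # invented
print(round(net_to_gross_ratio(track), 2))          # near 1.0 when straight
```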
Visualizing Dynamic Weather and Ocean Data in Google Earth
NASA Astrophysics Data System (ADS)
Castello, C.; Giencke, P.
2008-12-01
Katrina. Climate change. Rising sea levels. Low lake levels. These headlines, and countless others like them, underscore the need to better understand our changing oceans and lakes. Over the past decade, efforts such as the Global Ocean Observing System (GOOS) have added to this understanding through the creation of interoperable ocean observing systems. These systems, including buoy networks, gliders, UAVs, etc., have resulted in a dramatic increase in the amount of Earth observation data available to the public. Unfortunately, these data are often difficult for the public to consume, owing to large file sizes, incompatible formats, and/or a dearth of user-friendly visualization software. Google Earth offers a flexible way to visualize Earth observation data. Marrying high-resolution orthoimagery, user-friendly query and navigation tools, and the power of OGC's KML standard, Google Earth can make observation data universally understandable and accessible. This presentation will feature examples of meteorological and oceanographic data visualized using KML and Google Earth, along with tools and tips for integrating other such environmental datasets.
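As a minimal sketch of how one observation might be published for Google Earth, the snippet below writes a single time-stamped KML placemark; the buoy name and coordinates are invented for illustration.

```python
# Write one observation as a KML placemark that Google Earth can open.
# Station ID, reading, and coordinates are invented for illustration.
placemark = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Buoy 45007 - surface temp 14.2 C</name>
    <TimeStamp><when>2008-09-01T12:00:00Z</when></TimeStamp>
    <Point><coordinates>-87.026,42.674,0</coordinates></Point>
  </Placemark>
</kml>"""
with open("observation.kml", "w") as f:
    f.write(placemark)
```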
Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.
Aronov, Dmitriy; Tank, David W
2014-10-22
Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.
The Development of the Navigation System for Visually Impaired Persons
2001-10-25
method) is used in our system. In this paper, we refer to the developed methods, which are the positioning method without DGPS and the method of the... ...impaired. ACKNOWLEDGMENT: This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific...
Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.
Liverence, Brandon M; Scholl, Brian J
2015-07-01
Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion.
Brighton, Caroline H.; Thomas, Adrian L. R.; Taylor, Graham K.
2017-01-01
The ability to intercept uncooperative targets is key to many diverse flight behaviors, from courtship to predation. Previous research has looked for simple geometric rules describing the attack trajectories of animals, but the underlying feedback laws have remained obscure. Here, we use GPS loggers and onboard video cameras to study peregrine falcons, Falco peregrinus, attacking stationary targets, maneuvering targets, and live prey. We show that the terminal attack trajectories of peregrines are not described by any simple geometric rule as previously claimed, and instead use system identification techniques to fit a phenomenological model of the dynamical system generating the observed trajectories. We find that these trajectories are best—and exceedingly well—modeled by the proportional navigation (PN) guidance law used by most guided missiles. Under this guidance law, turning is commanded at a rate proportional to the angular rate of the line-of-sight between the attacker and its target, with a constant of proportionality (i.e., feedback gain) called the navigation constant (N). Whereas most guided missiles use navigation constants falling on the interval 3 ≤ N ≤ 5, peregrine attack trajectories are best fitted by lower navigation constants (median N < 3). This lower feedback gain is appropriate at the lower flight speed of a biological system, given its presumably higher error and longer delay. This same guidance law could find use in small visually guided drones designed to remove other drones from protected airspace. PMID:29203660
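The proportional navigation law itself is compact enough to simulate. The sketch below commands a turn rate equal to N times the measured line-of-sight rate in a 2-D pursuit; speeds, the time step, and the choice N = 2.6 are illustrative values consistent with the reported median N < 3, not parameters fitted in the study.

```python
# Minimal 2-D proportional navigation (PN) pursuit: commanded turn rate is
# N times the line-of-sight angular rate. All numbers are illustrative.
import numpy as np

def pn_pursuit(att, tgt, v_att, v_tgt, N=2.6, dt=0.02, steps=2000):
    heading = 0.0
    los_prev = np.arctan2(*(tgt - att)[::-1])        # arctan2(dy, dx)
    path = [att.copy()]
    for _ in range(steps):
        tgt = tgt + v_tgt * dt                       # target drifts along
        los = np.arctan2(*(tgt - att)[::-1])
        los_rate = np.arctan2(np.sin(los - los_prev),
                              np.cos(los - los_prev)) / dt   # wrapped rate
        heading += N * los_rate * dt                 # PN steering command
        att = att + v_att * dt * np.array([np.cos(heading), np.sin(heading)])
        los_prev = los
        path.append(att.copy())
        if np.linalg.norm(tgt - att) < 1.0:          # intercept reached
            break
    return np.array(path)

path = pn_pursuit(att=np.array([0.0, 0.0]), tgt=np.array([200.0, 50.0]),
                  v_att=20.0, v_tgt=np.array([0.0, 5.0]))
```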
Bioinspired engineering of exploration systems for NASA and DoD
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Chahl, Javaan; Srinivasan, M. V.; Young, L.; Werblin, Frank; Hine, Butler; Zornetzer, Steven
2002-01-01
A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight low-power autonomous flight systems should be useful for flight control of such biomorphic flyers for both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel. Features are mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of the unique blending of insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications. We describe a few future mission scenarios for Mars exploration, uniquely enabled by these newly developed biomorphic flyers.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
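The entropy cue is simple to reproduce. A minimal sketch computing the Shannon entropy of a grayscale histogram follows; the decision threshold is an illustrative value, not the authors' tuning.

```python
# Shannon entropy of an image's intensity histogram: low entropy suggests a
# single dominant object, high entropy suggests several different objects.
import numpy as np

def image_entropy(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))   # bits, in [0, 8] for 8-bit images

def looks_like_single_object(gray, threshold=4.0):   # threshold is illustrative
    return image_entropy(gray) < threshold
```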
NASA Technical Reports Server (NTRS)
Oliver, B. M.; Gower, J. F. R.
1977-01-01
A data acquisition system using a Litton LTN-51 inertial navigation unit (INU) was tested and used for aircraft track recovery and for location and tracking from the air of targets at sea. The characteristic position drift of the INU is compensated for by sighting landmarks of accurately known position at discrete time intervals using a visual sighting system in the transparent nose of the Beechcraft 18 aircraft used. For an aircraft altitude of about 300 m, theoretical and experimental tests indicate that calculated aircraft and/or target positions obtained from the interpolated INU drift curve will be accurate to within 10 m for landmarks spaced approximately every 15 minutes in time. For applications in coastal oceanography, such as surface current mapping by tracking artificial targets, the system allows a broad area to be covered without use of high altitude photography and its attendant needs for large targets and clear weather.
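The drift-compensation idea reduces to interpolating the INU error between landmark fixes. A minimal sketch, with invented fix times and drift values:

```python
# Correct raw INU positions by interpolating the drift measured at landmark
# sightings. Fix times and drift values are invented for illustration.
import numpy as np

fix_times = np.array([0.0, 900.0, 1800.0])        # landmark fixes, s
fix_drift_east = np.array([0.0, 120.0, 260.0])    # INU error at fixes, m
fix_drift_north = np.array([0.0, -40.0, -65.0])

def corrected_position(t, inu_east, inu_north):
    """Subtract the interpolated drift from the raw INU position."""
    de = np.interp(t, fix_times, fix_drift_east)
    dn = np.interp(t, fix_times, fix_drift_north)
    return inu_east - de, inu_north - dn

print(corrected_position(450.0, inu_east=5060.0, inu_north=1980.0))
```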
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
PubNet: a flexible system for visualizing literature derived networks
Douglas, Shawn M; Montelione, Gaetano T; Gerstein, Mark
2005-01-01
We have developed PubNet, a web-based tool that extracts several types of relationships returned by PubMed queries and maps them into networks, allowing for graphical visualization, textual navigation, and topological analysis. PubNet supports the creation of complex networks derived from the contents of individual citations, such as genes, proteins, Protein Data Bank (PDB) IDs, Medical Subject Headings (MeSH) terms, and authors. This feature allows one to, for example, examine a literature derived network of genes based on functional similarity. PMID:16168087
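As a rough sketch of the kind of network PubNet derives, the snippet below links any two authors who share a citation, using networkx and invented records; PubNet itself operates on live PubMed query results rather than hand-built dictionaries.

```python
# Build a toy co-authorship network from citation records: nodes are
# authors, edges link authors sharing a citation. Records are invented.
import itertools
import networkx as nx

records = [
    {"pmid": "1", "authors": ["Douglas SM", "Gerstein M"]},
    {"pmid": "2", "authors": ["Douglas SM", "Montelione GT", "Gerstein M"]},
]
G = nx.Graph()
for rec in records:
    for a, b in itertools.combinations(rec["authors"], 2):
        G.add_edge(a, b)
print(G.number_of_nodes(), G.number_of_edges())  # 3 nodes, 3 edges
```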
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
ERIC Educational Resources Information Center
Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…
ERIC Educational Resources Information Center
Firat, Mehmet; Kabakci, Isil
2010-01-01
The interactional feature of hypermedia that allows high-level student-control is considered as one of the most important advantages that hypermedia provides for learning and teaching. However, high-level student control in hypermedia might not always lead to high-level learning performance. The learner is likely to experience navigation problems…
NASA Astrophysics Data System (ADS)
Rakas, J.; Nikolic, M.; Bauranov, A.
2017-12-01
Lightning storms are a serious hazard that can damage vital human infrastructure. In aviation, lightning strikes cause outages to air traffic control equipment and facilities that result in major disruptions to the network, causing delays and financial costs measured in the millions of dollars. Failure of critical systems, such as Visual Navigational Aids (Visual NAVAIDS), is particularly dangerous since NAVAIDS are an essential part of landing procedures. Precision instrument approach, an operation utilized during poor visibility conditions, relies on several of these systems, and their failure leads to holding patterns and ultimately diversions to other airports. These disruptions lead to both ground and airborne delay. Accurate prediction of these outages and their costs is a key prerequisite for successful investment planning. The air traffic management and control sector needs accurate information to successfully plan maintenance and develop a more robust system under the threat of increasing lightning rates. To analyze the issue, we couple the Remote Monitoring and Logging System (RMLS) and Aviation System Performance Metrics (ASPM) databases to identify lightning-induced outages and connect them with weather conditions, demand, and landing runway to calculate the total delays induced by the outages, as well as the number of cancellations and diversions. The costs are then determined by calculating direct costs to aircraft operators and the cost of passengers' time for delays, cancellations and diversions. The results indicate that (1) not all NAVAIDS are created equal, and (2) outside conditions matter. The cost of an outage depends on the importance of the failed system and the conditions that prevailed before, during and after the failure. An outage that occurs during high demand and poor weather conditions is more likely to result in more delays and higher costs.
Image-Aided Navigation Using Cooperative Binocular Stereopsis
2014-03-27
...an inertial measurement unit (IMU). This technique capitalizes on an IMU's ability to capture quick motion and the ability of GPS to constrain long...the sensor-aided IMU framework. Visual sensors provide a number of benefits, such as low cost and weight. These sensors are also able to measure...
Creating Accessible Science Museums with User-Activated Environmental Audio Beacons (Ping!)
ERIC Educational Resources Information Center
Landau, Steven; Wiener, William; Naghshineh, Koorosh; Giusti, Ellen
2005-01-01
In 2003, Touch Graphics Company carried out research on a new invention that promises to improve accessibility to science museums for visitors who are visually impaired. The system, nicknamed Ping!, allows users to navigate an exhibit area, listen to audio descriptions, and interact with exhibits using a cell phone-based interface. The system…
Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.
2013-01-01
We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256
Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes
Murray, Trevor; Zeil, Jochen
2017-01-01
Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
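The image-difference measure underlying catchment areas and volumes can be sketched directly: compute the RMS pixel difference between the current panoramic view and the reference snapshot, minimized over horizontal rotations. A minimal version, not the authors' rendering pipeline:

```python
# Rotational image difference between a panoramic view and a reference
# snapshot: RMS pixel difference minimised over horizontal shifts.
import numpy as np

def rms_diff(a, b):
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def rotational_image_difference(view, snapshot):
    """view, snapshot: (height, width) panoramic grayscale images."""
    diffs = [rms_diff(np.roll(view, shift, axis=1), snapshot)
             for shift in range(view.shape[1])]
    best = int(np.argmin(diffs))
    return diffs[best], best   # residual difference and best heading shift
```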
Navigated MRI-guided liver biopsies in a closed-bore scanner: experience in 52 patients.
Moche, Michael; Heinig, Susann; Garnov, Nikita; Fuchs, Jochen; Petersen, Tim-Ole; Seider, Daniel; Brandmaier, Philipp; Kahn, Thomas; Busse, Harald
2016-08-01
To evaluate clinical effectiveness and diagnostic efficiency of a navigation device for MR-guided biopsies of focal liver lesions in a closed-bore scanner. In 52 patients, 55 biopsies were performed. An add-on MR navigation system with optical instrument tracking was used for image guidance and biopsy device insertion outside the bore. Fast control imaging allowed visualization of the true needle position at any time. The biopsy workflow and procedure duration were recorded. Histological analysis and clinical course/outcome were used to calculate sensitivity, specificity and diagnostic accuracy. Fifty-four of 55 liver biopsies were performed successfully with the system. No major and four minor complications occurred. Mean tumour size was 23 ± 14 mm and the skin-to-target length ranged from 22 to 177 mm. In 39 cases, the access path was double oblique. Sensitivity, specificity and diagnostic accuracy were 88 %, 100 % and 92 %, respectively. The mean procedure time was 51 ± 12 min, whereas the puncture itself lasted 16 ± 6 min. On average, four control scans were taken. Using this navigation device, biopsies of poorly visible and difficult-to-access liver lesions could be performed safely and reliably in a closed-bore MRI scanner. The system can be easily implemented in the clinical routine workflow. • Targeted liver biopsies could be reliably performed in a closed-bore MRI. • The navigation system allows for image guidance outside of the scanner bore. • Assisted MRI-guided biopsies are helpful for focal lesions with a difficult access. • Successful integration of the method in clinical workflow was shown. • Subsequent system installation in an existing MRI environment is feasible.
Helicopter pilot scan techniques during low-altitude high-speed flight.
Kirby, Christopher E; Kennedy, Quinn; Yang, Ji Hyun
2014-07-01
This study examined pilots' visual scan patterns during a simulated high-speed, low-level flight and how their scan rates related to flight performance. As helicopters become faster and more agile, pilots are expected to navigate at low altitudes while traveling at high speeds. A pilot's ability to interpret information from a combination of visual sources determines not only mission success, but also aircraft and crew survival. In a fixed-base helicopter simulator modeled after the U.S. Navy's MH-60S, 17 active-duty Navy helicopter pilots with varying total flight times flew and navigated through a simulated southern Californian desert course. Pilots' scan rate and fixation locations were monitored using an eye-tracking system while they flew through the course. Flight parameters, including altitude, were recorded using the simulator's recording system. Experienced pilots with more than 1000 total flight hours better maintained a constant altitude (mean altitude deviation = 48.52 ft, SD = 31.78) than less experienced pilots (mean altitude deviation = 73.03 ft, SD = 10.61) and differed in some aspects of their visual scans. They spent more time looking at the instrument display and less time looking out the window (OTW) than less experienced pilots. Looking OTW was associated with less consistency in maintaining altitude. Results may aid training effectiveness specific to helicopter aviation, particularly in high-speed low-level flight conditions.
Developing Visualization Techniques for Semantics-based Information Networks
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Hall, David R.
2003-01-01
Information systems incorporating complex network-structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for the assistance of a robotic wheelchair's navigation is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor-ends. The interface provides two navigation modes: non-autonomous and autonomous. The non-autonomous driving of the robotic wheelchair is done by means of a hand-joystick. The joystick directs the motion of the vehicle within the environment. The autonomous driving is performed when the user of the wheelchair has to turn (90, -90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the environment layout and the pose (position and orientation) of the wheelchair within the environment. Experimental and statistical results of the interface are also shown in this work.
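The autonomous turning mode can be sketched as a proportional heading controller driven by the SLAM pose estimate. The callbacks, gain, and limits below are hypothetical placeholders, not the authors' values:

```python
# Proportional in-place turn: rotate until the heading error to the
# requested turn falls below a tolerance. get_heading/send_angular_rate are
# hypothetical hooks into the SLAM pose estimate and motor controller.
import numpy as np

def turn_in_place(get_heading, send_angular_rate, turn_deg,
                  k=1.5, tol=np.radians(2.0), w_max=0.5):
    goal = get_heading() + np.radians(turn_deg)
    while True:
        err = np.arctan2(np.sin(goal - get_heading()),
                         np.cos(goal - get_heading()))   # wrap to [-pi, pi]
        if abs(err) < tol:
            send_angular_rate(0.0)
            return
        send_angular_rate(np.clip(k * err, -w_max, w_max))
```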
Age-related similarities and differences in monitoring spatial cognition.
Ariel, Robert; Moffat, Scott D
2018-05-01
Spatial cognitive performance is impaired in later adulthood but it is unclear whether the metacognitive processes involved in monitoring spatial cognitive performance are also compromised. Inaccurate monitoring could affect whether people choose to engage in tasks that require spatial thinking and also the strategies they use in spatial domains such as navigation. The current experiment examined potential age differences in monitoring spatial cognitive performance in a variety of spatial domains including visual-spatial working memory, spatial orientation, spatial visualization, navigation, and place learning. Younger and older adults completed a 2D mental rotation test, 3D mental rotation test, paper folding test, spatial memory span test, two virtual navigation tasks, and a cognitive mapping test. Participants also made metacognitive judgments of performance (confidence judgments, judgments of learning, or navigation time estimates) on each trial for all spatial tasks. Preference for allocentric or egocentric navigation strategies was also measured. Overall, performance was poorer and confidence in performance was lower for older adults than younger adults. In most spatial domains, the absolute and relative accuracy of metacognitive judgments was equivalent for both age groups. However, age differences in monitoring accuracy (specifically relative accuracy) emerged in spatial tasks involving navigation. Confidence in navigating for a target location also mediated age differences in allocentric navigation strategy use. These findings suggest that with the possible exception of navigation monitoring, spatial cognition may be spared from age-related decline even though spatial cognition itself is impaired in older age.
Autonomous vision-based navigation for proximity operations around binary asteroids
NASA Astrophysics Data System (ADS)
Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo
2018-02-01
Future missions to small bodies demand higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.
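One plausible image-processing step for such a relative-state filter is to threshold the camera image, take the asteroid's brightness centroid, and convert the pixel offset into line-of-sight angles from the camera field of view. The sketch below is a generic illustration under invented values, not the algorithms described in the note.

```python
# Centroid-based line-of-sight measurement from an asteroid image.
# Threshold and FOV are illustrative assumptions.
import numpy as np

def line_of_sight_angles(image, fov_deg=5.5):
    """image: 2-D grayscale array; returns (azimuth, elevation) in radians."""
    mask = image > image.mean() + 3 * image.std()   # bright asteroid pixels
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                  # asteroid not in FOV
    cx, cy = xs.mean(), ys.mean()
    h, w = image.shape
    rad_per_px = np.radians(fov_deg) / w
    return ((cx - w / 2) * rad_per_px, (cy - h / 2) * rad_per_px)
```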
Navigating surgical fluorescence cameras using near-infrared optical tracking.
van Oosterom, Matthias; den Houting, David; van de Velde, Cornelis; van Leeuwen, Fijs
2018-05-01
Fluorescence guidance facilitates real-time intraoperative visualization of the tissue of interest. However, due to attenuation, the application of fluorescence guidance is restricted to superficial lesions. To overcome this shortcoming, we have previously applied three-dimensional surgical navigation to position the fluorescence camera within reach of the superficial fluorescent signal. Unfortunately, in open surgery, the near-infrared (NIR) optical tracking system (OTS) used for navigation also induced interference during NIR fluorescence imaging. In an attempt to support future implementation of navigated fluorescence cameras, different aspects of this interference were characterized and solutions were sought. Two commercial fluorescence cameras for open surgery were studied in (surgical) phantom and human tissue setups using two different NIR OTSs and one OTS-simulating light-emitting diode setup. Following the outcome of these measurements, OTS settings were optimized. Measurements indicated the OTS interference was caused by: (1) spectral overlap between the OTS light and the camera, (2) OTS light intensity, (3) OTS duty cycle, (4) OTS frequency, (5) fluorescence camera frequency, and (6) fluorescence camera sensitivity. By optimizing points 2 to 4, navigation of fluorescence cameras during open surgery could be facilitated. Optimization of the OTS and camera compatibility can be used to support navigated fluorescence guidance concepts.
Implementation of a virtual laryngoscope system using efficient reconstruction algorithms.
Luo, Shouhua; Yan, Yuling
2009-08-01
Conventional fiberoptic laryngoscopy may cause discomfort to the patient, and in some cases it can lead to side effects that include perforation, infection and hemorrhage. Virtual laryngoscopy (VL) can overcome this problem, and it may further lower the risk of operation failures. Very few virtual endoscope (VE) based investigations of the larynx have been described in the literature. CT data sets from a healthy subject were used for the VL studies. An algorithm of preprocessing and region-growing for 3-D image segmentation is developed. An octree-based approach is applied in our VL system, which facilitates rapid construction of iso-surfaces. Locating techniques are used for fast rendering and navigation (fly-through). Our VL visualization system provides real-time and efficient fly-through navigation. The virtual camera can be arranged so that it moves along the airway in either direction. Snapshots were taken during fly-throughs. The system can automatically adjust the direction of the virtual camera and prevent collisions between the camera and the wall of the airway. A virtual laryngoscope (VL) system using the OpenGL (Open Graphics Library) platform for interactive rendering and 3D visualization of the laryngeal framework and upper airway is established. OpenGL is supported on major operating systems and works with every major windowing system. The VL system runs on regular PC workstations and was successfully tested and evaluated using CT data from a normal subject.
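Seeded region growing, as used here for airway segmentation, can be sketched as a flood fill over voxels whose CT intensity stays within an air-like range. The HU thresholds below are illustrative, not the authors' settings:

```python
# Seeded region growing on a CT volume: flood-fill all connected voxels
# whose intensity stays within an air-like Hounsfield range.
from collections import deque
import numpy as np

def region_grow(volume, seed, lo=-1024, hi=-500):
    """volume: 3-D array of CT values (HU); seed: (z, y, x) inside the lumen."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```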
Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2006-05-01
In this paper, we primarily discuss the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We describe different methods for the detection of salient terrain features based on image texture analysis techniques. We also present three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies the terrain condition exclusively within each sub-terrain region based on terrain visual cues. The Kalman filtering technique is applied for aggregative fusion of the sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We conducted a comparative performance evaluation of all three terrain classifiers and present the results in this paper.
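A minimal sketch of the aggregation step under stated assumptions: each sub-terrain classifier is taken to emit a noisy traversability score in [0, 1] with a known variance, and a scalar Kalman update fuses the scores into one regional estimate. The noise figures and score convention are illustrative.

    def kalman_fuse(scores, variances, prior=0.5, prior_var=1.0):
        # Sequentially fuse per-sub-terrain traversability scores.
        est, var = prior, prior_var
        for z, r in zip(scores, variances):
            k = var / (var + r)          # Kalman gain
            est = est + k * (z - est)    # measurement update
            var = (1.0 - k) * var
        return est, var

    # Usage: three sub-regions scored by, say, the fuzzy-logic classifier.
    est, var = kalman_fuse([0.8, 0.6, 0.9], [0.05, 0.10, 0.05])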
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among the various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) be continuously operating, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) be capable of supporting diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.
Surprising characteristics of visual systems of invertebrates.
González-Martín-Moro, J; Hernández-Verdejo, J L; Jiménez-Gahete, A E
2017-01-01
To communicate relevant and striking aspects of the visual systems of some familiar invertebrates. Review of the related literature. The capacity of snails to regenerate a complete eye, the benefit of the oval shape of the compound eye of many flying insects as a way of stabilising the image during flight, the potential advantages related to the extreme refractive error that characterises the ocelli of many insects, as well as the ability to detect polarised light as a navigation system, are some of the surprising capabilities present in the small invertebrate eyes that are described in this work. Invertebrate eyes have capabilities and sensory modalities that are not present in the human eye. The study of the eyes of these animals can help us to improve our understanding of our own visual system, and inspire the development of optical devices. Copyright © 2016 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
Kohlmeier, Carsten; Behrens, Peter; Böger, Andreas; Ramachandran, Brinda; Caparso, Anthony; Schulze, Dirk; Stude, Philipp; Heiland, Max; Assaf, Alexandre T
2017-12-01
The ATI SPG microstimulator is designed to be fixed on the posterior maxilla, with the integrated lead extending into the pterygopalatine fossa to electrically stimulate the sphenopalatine ganglion (SPG) as a treatment for cluster headache. Preoperative surgical planning to ensure placement of the microstimulator in close proximity (within 5 mm) to the SPG is critical for treatment efficacy. The aim of this study was to improve the surgical procedure by navigating the initial dissection prior to implantation using a passive optical navigation system, and to match the post-operative CBCT images with the preoperative treatment plan to verify the accuracy of the intraoperative placement of the microstimulator. Custom methods and software were used that result in a 3D rotatable, digitally reconstructed fluoroscopic image illustrating the patient-specific placement of the ATI SPG microstimulator. Those software tools were preoperatively integrated with the planning software of the navigation system to be used intraoperatively for navigated placement. Intraoperatively, the SPG microstimulator was implanted by completing the initial dissection with CT navigation, while the final position of the stimulator was verified by 3D CBCT. Those reconstructed images were then immediately matched with the preoperative CT scans containing the digitally inserted SPG microstimulator. This method allowed for visual comparison of both CT scans and verified correct positioning of the SPG microstimulator. Twenty-four surgeries were performed using this new method of CT-navigated assistance during SPG microstimulator implantation. These results were compared to the results of 21 patients previously implanted without the assistance of CT navigation. Using CT navigation during the initial dissection, an average reduction of 1.2 mm in the distance between the target point and the electrode tip of the SPG microstimulator was achieved. Using the navigation software for navigated implantation and matching the preoperatively planned scans with those performed post-operatively, the average distance was 2.17 mm with navigation, compared to 3.37 mm in the surgeries without navigation. Results from this new procedure showed a significant reduction (p = 0.009) in the average distance from the SPG microstimulator to the desired target point. Therefore, a distinct improvement could be achieved in the positioning of the SPG microstimulator through the use of intraoperative navigation during the initial dissection and by post-operative matching of pre- and post-operatively performed CBCT scans.
[Basic concept in computer assisted surgery].
Merloz, Philippe; Wu, Hao
2006-03-01
To investigate the application of medical digital imaging systems and computer technologies in orthopedics. The main computer-assisted surgery systems comprise the following four subcategories. (1) A collection and recording process for digital data on each patient, including preoperative images (CT scans, MRI, standard X-rays), intraoperative visualization (fluoroscopy, ultrasound), and the intraoperative position and orientation of surgical instruments or bone sections (using 3D localisers). Data merging is based on the matching of preoperative imaging (CT scans, MRI, standard X-rays) and intraoperative visualization (anatomical landmarks, or bone surfaces digitized intraoperatively via a 3D localiser; intraoperative ultrasound images processed for delineation of bone contours). (2) In cases where only intraoperative images are used for computer-assisted surgical navigation, the calibration of the intraoperative imaging system replaces the merged data system, which is then no longer necessary. (3) A system that provides aid in decision-making, so that the surgical approach is planned on the basis of multimodal information: the interactive positioning of surgical instruments or bone sections transmitted via pre- or intraoperative images, display of elements to guide surgical navigation (direction, axis, orientation, length and diameter of a surgical instrument, impingement, etc.). (4) A system that monitors the surgical procedure, thereby ensuring that the optimal strategy defined at the preoperative stage is taken into account. It is possible that computer-assisted orthopedic surgery systems will enable surgeons to better assess the accuracy and reliability of the various operative techniques, an indispensable stage in the optimization of surgery.
Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures.
Nakao, Megumi; Endo, Shota; Nakao, Shinichi; Yoshida, Munehito; Matsuda, Tetsuya
2016-01-01
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing only a small portion of the target structures. Although optical tracking systems are able to measure the tip of the surgical tool during surgery, the poor shape information available during surgery makes accurate cutting difficult, even if preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting. However, time-varying shape changes during cutting procedures are still a challenging issue for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. The framework records the history of the measured drill tip locations as a volume label and visualizes the region remaining to be cut, overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to shape similarity, the total distance moved by the cutting tool, and the required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.
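A minimal sketch of the volume-label idea: tracked drill-tip positions are rasterized into a boolean label volume, and the voxels still to be cut are the planned-resection voxels not yet visited. The voxel size, the cubic neighborhood approximating the spherical tip footprint, and all names are illustrative assumptions.

    import numpy as np

    def mark_drilled(label, tip_positions_mm, voxel_mm=0.5, tip_radius_mm=1.5):
        # Set label voxels in a cubic neighborhood approximating the tip radius
        # around each measured tip position (positions given in millimeters).
        r = int(round(tip_radius_mm / voxel_mm))
        for p in tip_positions_mm:
            i, j, k = (np.asarray(p) / voxel_mm).round().astype(int)
            label[max(i - r, 0):i + r + 1,
                  max(j - r, 0):j + r + 1,
                  max(k - r, 0):k + r + 1] = True
        return label

    # Remaining material = planned cut region minus what the tip has visited;
    # this boolean volume is what gets overlaid on the endoscopic image.
    planned = np.zeros((64, 64, 64), dtype=bool)
    planned[20:40, 20:40, 20:40] = True
    drilled = mark_drilled(np.zeros_like(planned), [(12.0, 13.0, 14.0)])
    remaining = planned & ~drilled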
Real-time synthetic vision cockpit display for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.
1999-07-01
Low-cost, high-performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated to the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m 95% positioning, sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and the construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low-cost, high-performance guidance and situational awareness in all phases of flight.
Hyperspace geography: visualizing fitness landscapes beyond 4D.
Wiles, Janet; Tonkes, Bradley
2006-01-01
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
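One way to realize such an unfolding of a Boolean space onto a 2D grid, sketched below as a general illustration rather than HSGP's exact layout: assign alternate bits of each corner to the row and column axes and order each axis by reflected Gray code, so single-bit neighbors land in nearby grid cells.

    from itertools import product

    def gray_rank(g):
        # Position of Gray code g in the reflected Gray-code sequence.
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    def cell_of(bits):
        # Interleave bits across the two axes, then Gray-order each axis.
        to_int = lambda bs: sum(b << i for i, b in enumerate(bs))
        return gray_rank(to_int(bits[0::2])), gray_rank(to_int(bits[1::2]))

    # Usage: unfold a 4-bit fitness landscape (here, bit count) onto a 4x4 grid.
    fitness = {b: sum(b) for b in product((0, 1), repeat=4)}
    grid = {cell_of(b): f for b, f in fitness.items()}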
GPU-based multi-volume ray casting within VTK for medical applications.
Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-03-01
Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the Visualization Toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implemented method is cross-platform (Windows, Linux, and Mac OS X) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (> 15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
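The compositing rule itself is compact; the following CPU sketch (illustrative, not the authors' VTK/GPU implementation) blends one (color, alpha) sample per volume at each step along a ray, front to back, with early ray termination. Transfer functions and per-volume transforms are omitted, and a list of sample steps stands in for the ray.

    def composite_front_to_back(samples_per_step, alpha_cutoff=0.98):
        # samples_per_step: iterable of steps; each step is a list of
        # (color, alpha) samples, one from each volume at that ray position.
        color_acc, alpha_acc = 0.0, 0.0
        for step in samples_per_step:
            for c, a in step:                 # composite every volume's sample
                color_acc += (1.0 - alpha_acc) * a * c
                alpha_acc += (1.0 - alpha_acc) * a
            if alpha_acc >= alpha_cutoff:     # early ray termination
                break
        return color_acc, alpha_acc

    # Usage: two volumes sampled at three steps along one ray.
    ray = [[(1.0, 0.2), (0.5, 0.1)],
           [(0.8, 0.3), (0.2, 0.1)],
           [(0.9, 0.9), (0.1, 0.5)]]
    print(composite_front_to_back(ray))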
Laboratory in the sky. New frontiers in measurements aloft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spicer, C.W.; Kenny, D.V.; Shaw, W.J.
1994-09-01
This article describes a research aircraft for airborne measurements and the challenges that were overcome to deploy state-of-the-art measurement technology in an aircraft environment. We also focus on the chemical instrumentation and the recent addition of tandem mass spectrometry to the capabilities available for atmospheric characterization. The plane that we use to study atmospheric physical and chemical processes is a Grumman Gulfstream 1 (G-1), which is a twin-engine turboprop. The G-1 has a visual flight rule range exceeding 1500 nautical mi (endurance of about 6 h). It carries as much as 2800 lb of scientific payload with seats for four scientists and has a sampling speed range of 160-250 knots. The data acquisition system on the G-1 contains special interfaces to log data from a Long-Range Navigation system, the Global Positioning System, and an inertial navigation system, as well as particle measurement systems and other scientific probes. 3 refs., 7 figs., 2 tabs.
Alignment Jig for the Precise Measurement of THz Radiation
NASA Technical Reports Server (NTRS)
Javadi, Hamid H.
2009-01-01
A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
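A one-dimensional, fixed-gain sketch of the blending just described, with the IMU propagating the state and GPS correcting it when a fix arrives; the scalar state and gain are illustrative stand-ins for the package's full Kalman filter.

    def blend_step(pos, vel, accel_imu, dt=0.01, gps_pos=None, k=0.05):
        # Propagate with the IMU measurement...
        vel += accel_imu * dt
        pos += vel * dt
        # ...and pull the estimate toward GPS whenever a fix is available.
        if gps_pos is not None:
            pos += k * (gps_pos - pos)
        return pos, vel

    # Usage: 100 Hz IMU updates with a 1 Hz GPS fix.
    pos, vel = 0.0, 0.0
    for i in range(200):
        fix = 1.0 if i % 100 == 0 else None
        pos, vel = blend_step(pos, vel, accel_imu=0.02, gps_pos=fix)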
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
Desert ants learn vibration and magnetic landmarks.
Buehlmann, Cornelia; Hansson, Bill S; Knaden, Markus
2012-01-01
The desert ants Cataglyphis navigate not only by path integration but also by using visual and olfactory landmarks to pinpoint the nest entrance. Here we show that Cataglyphis noda can additionally use magnetic and vibrational landmarks as nest-defining cues. The magnetic field may typically provide directional rather than positional information, and vibrational signals so far have been shown to be involved in social behavior. Thus it remains questionable if magnetic and vibration landmarks are usually provided by the ants' habitat as nest-defining cues. However, our results point to the flexibility of the ants' navigational system, which even makes use of cues that are probably most often sensed in a different context.
Deictic primitives for general purpose navigation
NASA Technical Reports Server (NTRS)
Crismann, Jill D.
1994-01-01
A visually-based deictic primitive used as an elementary command set for general purpose navigation was investigated. It was shown that a simple 'follow your eyes' scenario is sufficient for tracking a moving target. Limitations on velocity and acceleration, and modeling of the response of the mechanical systems, were enforced. Realistic paths of the robots were produced during the simulation. Scientists could remotely command a planetary rover to go to a particular rock formation that may be interesting. Similarly, an expert at plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile robot. Because deictic primitives are expressed relative to what the camera sees rather than in fixed coordinates, we could imagine that the exact same control software could be used for all of these applications.
A Google Glass navigation system for ultrasound and fluorescence dual-mode image-guided surgery
NASA Astrophysics Data System (ADS)
Zhang, Zeshu; Pei, Jing; Wang, Dong; Hu, Chuanzhen; Ye, Jian; Gan, Qi; Liu, Peng; Yue, Jian; Wang, Benzhong; Shao, Pengfei; Povoski, Stephen P.; Martin, Edward W.; Yilmaz, Alper; Tweedle, Michael F.; Xu, Ronald X.
2016-03-01
Surgical resection remains the primary curative intervention for cancer treatment. However, residual tumor after resection is very common, leading to recurrence of the disease and the need for re-resection. We develop a surgical Google Glass navigation system that combines near-infrared fluorescent imaging and ultrasonography for intraoperative detection of tumor sites and assessment of surgical resection boundaries, as well as for guiding sentinel lymph node (SLN) mapping and biopsy. The system consists of a monochromatic CCD camera, a computer, a Google Glass wearable headset, an ultrasonic machine, and an array of LED light sources. All the above components, except the Google Glass, are connected to a host computer by a USB or HDMI port. A wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A control program is written in C++ to call OpenCV functions for image calibration, processing, and display. The technical feasibility of the system is tested in both tumor-simulating phantoms and in a human subject. When the system is used for simulated phantom resection tasks, the tumor boundaries, invisible to the naked eye, can be clearly visualized with the surgical Google Glass navigation system. This system has also been used in an IRB-approved protocol in a single patient during SLN mapping and biopsy in the First Affiliated Hospital of Anhui Medical University, demonstrating the ability to successfully localize and resect all apparent SLNs. In summary, our tumor-simulating phantom and human subject studies have demonstrated the technical feasibility of using the proposed goggle navigation system during cancer surgery.
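The display step can be illustrated with standard OpenCV calls; the Python sketch below stands in for the authors' C++ program, and the threshold, colormap, and blend weight are assumptions rather than their settings.

    import cv2
    import numpy as np

    def overlay_fluorescence(color_bgr, nir_gray, threshold=40, alpha=0.6):
        # Blend a pseudocolored NIR fluorescence signal onto the color frame,
        # only where the NIR intensity exceeds the detection threshold.
        mask = nir_gray > threshold
        pseudo = cv2.applyColorMap(nir_gray, cv2.COLORMAP_JET)
        blended = cv2.addWeighted(color_bgr, 1 - alpha, pseudo, alpha, 0)
        out = color_bgr.copy()
        out[mask] = blended[mask]
        return out

    # Usage with synthetic frames standing in for the camera streams.
    color = np.zeros((480, 640, 3), dtype=np.uint8)
    nir = np.zeros((480, 640), dtype=np.uint8)
    nir[200:240, 300:360] = 200
    display = overlay_fluorescence(color, nir)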
SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Zhao, S; Chen, Y
2014-06-01
Purpose: The inability to visualize dose intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction is developed. Methods: Multiple organs including the tumor are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying the Least Square Method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated by working on eight patients with prostate cancer. The navigation has passed the precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method has superiorities in preserving the integrality and connectivity of reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissues. During the navigation, surgeons can observe the coordinates of instruments in real time employing the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and quality of 3D reconstruction, the efficiency in dose planning, and the accuracy in navigation can all be improved simultaneously.
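A minimal sketch of the coordinate-registration step, assuming paired fiducial points measured in the model and patient (ETS) frames; the SVD-based Kabsch solution shown here is one standard least-squares realization, not necessarily the authors' exact formulation.

    import numpy as np

    def rigid_register(model_pts, patient_pts):
        # Both inputs are (N, 3) arrays of corresponding points.
        cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
        H = (model_pts - cm).T @ (patient_pts - cp)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T          # least-squares rotation, reflection excluded
        t = cp - R @ cm             # translation
        return R, t

    # Usage: registration residual on synthetic fiducials.
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    m = np.random.rand(6, 3)
    p = m @ R_true.T + np.array([1.0, 2.0, 3.0])
    R, t = rigid_register(m, p)
    resid = np.linalg.norm((m @ R.T + t) - p, axis=1).max()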
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
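A toy sketch of the component pattern the paper advocates, with all class names invented for illustration: components exchange plain Python objects through a small notification hook, so a data loader and a viewer written independently can be wired together at runtime.

    class Component:
        def __init__(self):
            self.listeners = []
        def notify(self, data):
            for cb in self.listeners:
                cb(data)

    class MoleculeLoader(Component):
        def load(self, path):
            molecule = {"name": path, "atoms": []}   # parsing elided
            self.notify(molecule)                    # hand off to any consumer

    class Viewer:
        def show(self, molecule):
            print("displaying", molecule["name"])

    # Interoperation via a runtime wire-up rather than compile-time coupling.
    loader, viewer = MoleculeLoader(), Viewer()
    loader.listeners.append(viewer.show)
    loader.load("1crn.pdb")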
Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.
Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico
2014-01-01
Needle insertion in soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found to be more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.
Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.
2015-01-01
A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations that has helped establish the technical merits of EFVS, without reliance on natural vision, to runways without category II/III ground-based navigation and lighting requirements. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrate the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described including efforts to enable low-visibility approach, landing, and roll-outs using EFVS under conditions as low as 300 feet RVR.
Parahippocampal and retrosplenial contributions to human spatial navigation
Epstein, Russell A.
2010-01-01
Spatial navigation is a core cognitive ability in humans and animals. Neuroimaging studies have identified two functionally-defined brain regions that activate during navigational tasks and also during passive viewing of navigationally-relevant stimuli such as environmental scenes: the parahippocampal place area (PPA) and the retrosplenial complex (RSC). Recent findings indicate that the PPA and RSC play distinct and complementary roles in spatial navigation, with the PPA more concerned with representation of the local visual scene and RSC more concerned with situating the scene within the broader spatial environment. These findings are a first step towards understanding the separate components of the cortical network that mediates spatial navigation in humans. PMID:18760955
Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.
Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling
2017-07-01
Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.
Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman
2006-01-01
The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information: the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension: functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.
Evaluating a de-cluttering technique for NextGen RNAV and RNP charts
DOT National Transportation Integrated Search
2012-10-14
The authors propose a de-cluttering technique to simplify the depiction of visually complex Area Navigation (RNAV) and Required Navigation Performance (RNP) procedures by reducing the number of paths shown on a single chart page. An experiment was co...
Navigation in a Virtual Environment Using a Walking Interface
2000-11-01
(Fukusima, 1993; Mittelstaedt & Glasauer, 1991; Schmuckler, 1995). Thus, only visual information is available for navigation by dead reckoning (Gallistel, 1990).
A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors.
Roubieu, Frédéric L; Serres, Julien R; Colonnier, Fabien; Franceschini, Nicolas; Viollet, Stéphane; Ruffier, Franck
2014-09-01
Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm/s with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125°/s). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as in the presence of wind.
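The two intertwined loops can be summarized in a few lines of Python, sketched under assumed sign conventions: the sum of the two lateral OF readings regulates forward speed, while the larger unilateral OF regulates clearance from the nearer wall. Gains and set-points are illustrative, not the robot's tuned values.

    def of_regulator(of_left, of_right, speed, lateral,
                     of_sum_setpoint=250.0, of_max_setpoint=125.0,
                     k_speed=0.002, k_lat=0.002):
        # Speed loop: hold the total OF constant (slow down in narrow corridors).
        speed += k_speed * (of_sum_setpoint - (of_left + of_right))
        # Lateral loop: hold the larger unilateral OF at its set-point
        # (steer away from the nearer, faster-moving wall).
        of_max = max(of_left, of_right)
        side = 1.0 if of_left > of_right else -1.0
        lateral += side * k_lat * (of_max - of_max_setpoint)
        return speed, lateral

    # Usage: one control step with the right wall nearer (higher right OF).
    speed, lateral = of_regulator(of_left=90.0, of_right=140.0,
                                  speed=0.5, lateral=0.0)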
An interactive web application for the dissemination of human systems immunology data.
Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien
2015-06-19
Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and to interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some complex software issues, which are discussed in the paper.
Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S
2016-10-01
Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores improved similarly in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925), as were payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data in the form of a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
Comprehension and Navigation of Networked Hypertexts
ERIC Educational Resources Information Center
Blom, Helen; Segers, Eliane; Knoors, Harry; Hermans, Daan; Verhoeven, Ludo
2018-01-01
This study aims to investigate secondary school students' reading comprehension and navigation of networked hypertexts with and without a graphic overview compared to linear digital texts. Additionally, it was studied whether prior knowledge, vocabulary, verbal, and visual working memory moderated the relation between text design and…
Embodied Interactions in Human-Machine Decision Making for Situation Awareness Enhancement Systems
2016-06-09
characterize differences in spatial navigation strategies in a complex task, the Traveling Salesman Problem (TSP). For the second year, we developed ... visual processing, leading to better solutions for spatial optimization problems. I will develop a framework to determine which body expressions best ... methods include systematic characterization of gestures during complex problem solving.
Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2016-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486
Visual control of navigation in insects and its relevance for robotics.
Srinivasan, Mandyam V
2011-08-01
Flying insects display remarkable agility, despite their diminutive eyes and brains. This review describes our growing understanding of how these creatures use visual information to stabilize flight, avoid collisions with objects, regulate flight speed, detect and intercept other flying insects such as mates or prey, navigate to a distant food source, and orchestrate flawless landings. It also outlines the ways in which these insights are now being used to develop novel, biologically inspired strategies for the guidance of autonomous, airborne vehicles. Copyright © 2011 Elsevier Ltd. All rights reserved.
Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI
NASA Astrophysics Data System (ADS)
Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg
2012-03-01
Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and aids in the diagnosis and differential diagnosis of various movement disorders, especially Parkinsonian syndromes. However, proximate brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time. To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate. Preliminary tests of the system show an added value over pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.
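A minimal sketch of the slicing computation, assuming the tracked probe pose has already been converted (via the calibration and registration chain) into a plane origin and two in-plane unit vectors in MRI voxel coordinates; SciPy's map_coordinates performs the interpolation. Grid size and spacing are illustrative.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def oblique_slice(volume, origin, u, v, size=(256, 256), spacing=1.0):
        # Sample the volume on the plane spanned by u and v through origin.
        rows, cols = size
        rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        pts = (origin[:, None, None]
               + u[:, None, None] * (rr - rows / 2) * spacing
               + v[:, None, None] * (cc - cols / 2) * spacing)
        return map_coordinates(volume, pts, order=1, cval=0.0)

    # Usage: an axial plane through the middle of a synthetic volume.
    vol = np.random.rand(128, 128, 128)
    sl = oblique_slice(vol, origin=np.array([64.0, 64.0, 64.0]),
                       u=np.array([1.0, 0.0, 0.0]),
                       v=np.array([0.0, 1.0, 0.0]))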
Baumann, Martin; Keinath, Andreas; Krems, Josef F; Bengler, Klaus
2004-05-01
Despite the usefulness of new on-board information systems, one has to be concerned about the potential distraction effects that they impose on the driver. Therefore, methods and procedures are necessary to assess the visual demand connected to the usage of an on-board system. The occlusion method is considered a strong candidate procedure for evaluating display designs with regard to their visual demand. This paper reports results from two experimental studies conducted to further evaluate this method. In the first study, performance in using an in-car navigation system was measured under three conditions: static (parking lot), occlusion (shutter glasses), and driving. The results show that the occlusion procedure can be used to simulate the visual requirements of real traffic conditions. In a second study, the occlusion method was compared to a global evaluation criterion based on total task time. It can be demonstrated that the occlusion method can identify tasks that meet this criterion but are nevertheless not manageable under driving conditions. It is concluded that the occlusion technique seems to be a reliable and valid method for evaluating visual and dialogue aspects of in-car information systems.
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Kent, Peter C. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2009-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places 'hot spots', or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Kent, Peter C. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2007-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places hot spots, or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Method and system for providing autonomous control of a platform
NASA Technical Reports Server (NTRS)
Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)
2012-01-01
The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system.Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory's considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm's performance and ability to process 'flight-like' imagery formats with a 'flight-like' trajectory, positioning ourselves to easily process flight data from the upcoming 'ISS Selfie' activity and then compare the algorithm's quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Bio-inspired display of polarization information using selected visual cues
NASA Astrophysics Data System (ADS)
Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader
2003-12-01
For imaging systems the polarization of electromagnetic waves carries much potentially useful information about such features of the world as the surface shape, material contents, local curvature of objects, as well as about the relative locations of the source, object and imaging system. The imaging system of the human eye however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some forms of sensory substitution are needed for representing polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classifications, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on image to represent various range of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of image with differing polarization, modulating luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
Rationale and description of a coordinated cockpit display for aircraft flight management
NASA Technical Reports Server (NTRS)
Baty, D. L.
1976-01-01
The design for aircraft cockpit display systems is discussed in detail. The system consists of a set of three beam penetration color cathode ray tubes (CRT). One of three orthogonal projects of the aircraft's state appears on each CRT which displays different views of the same information. The color feature is included to obtain visual separation of information elements. The colors of red, green and yellow are used to differentiate control, performance and navigation information. Displays are coordinated in information and color.
Lossnitzer, Dirk; Seitz, Sebastian A; Krautz, Birgit; Schnackenburg, Bernhard; André, Florian; Korosoglou, Grigorios; Katus, Hugo A; Steen, Henning
2015-07-26
To investigate if magnetic resonance (MR)-guided biopsy can improve the performance and safety of such procedures. A novel MR-compatible bioptome was evaluated in a series of in-vitro experiments in a 1.5T magnetic resonance imaging (MRI) system. The bioptome was inserted into explanted porcine and bovine hearts under real-time MR-guidance employing a steady state free precession sequence. The artifact produced by the metal element at the tip and the signal voids caused by the bioptome were visually tracked for navigation and allowed its constant and precise localization. Cardiac structural elements and the target regions for the biopsy were clearly visible. Our method allowed a significantly better spatial visualization of the bioptoms tip compared to conventional X-ray guidance. The specific device design of the bioptome avoided inducible currents and therefore subsequent heating. The novel MR-compatible bioptome provided a superior cardiovascular magnetic resonance (imaging) soft-tissue visualization for MR-guided myocardial biopsies. Not at least the use of MRI guidance for endomyocardial biopsies completely avoided radiation exposure for both patients and interventionalists. MRI-guided endomyocardial biopsies provide a better than conventional X-ray guided navigation and could therefore improve the specificity and reproducibility of cardiac biopsies in future studies.
The Influence of Individual Differences on Diagrammatic Communication and Problem Representation
ERIC Educational Resources Information Center
King, Laurel A.
2009-01-01
Understanding the user and customizing the interface to augment cognition and usability are goals of human computer interaction research and design. Yet, little is known about the influence of individual visual-verbal information presentation preferences on visual navigation and screen element usage. If consistent differences in visual navigation…
Kim, Huhn; Song, Haewon
2014-05-01
Nowadays, many automobile manufacturers are interested in applying the touch gestures that are used in smart phones to operate their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures in the operation of IVISs from the viewpoints of both driving safety and usability. In the experiment, two devices were used: one was the Apple iPad, with which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping touch gestures. The participants performed the touch operations using the two devices under visually occluded situations, which is a well-known technique for estimating load of visual attention while driving. In scrolling through a list, the flicking gestures required more time than the tapping gestures. Interestingly, both the flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time taken per operation and the visual attention load required for the panning gestures did not differ from those of the simple tapping gestures that are used in existing car navigation systems. In zooming in/out of a map, the average time taken per pinching gesture was similar to that of the tapping gesture but required higher visual attention. Moreover, pinching gestures at a display angle of 75° required that the participants severely bend their wrists. Because the display angles of many car navigation systems tends to be more than 75°, pinching gestures can cause severe fatigue on users' wrists. Furthermore, contrary to participants' evaluation of other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or if the car is in heavy traffic. However, the pinching gesture is not an appropriate method of operating IVISs while driving in the various scenarios examined in this study. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues vs auditory or haptic for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels as compared to high intensity. Navigation accuracy was lower under high level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers, who are simultaneously performing a physical task, the visual modality appears most effective under high level exertion while haptic cueing degrades performance.
Immersive visualization for navigation and control of the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Hartman, Frank R.; Cooper, Brian; Maxwell, Scott; Wright, John; Yen, Jeng
2004-01-01
The Rover Sequencing and Visualization Program (RSVP) is a suite of tools for sequencing of planetary rovers, which are subject to significant light time delay and thus are unsuitable for teleoperation.
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas
2018-01-01
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites. PMID:29673230
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Gakne, Paul Verlaine; O'Keefe, Kyle
2018-04-17
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.
Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie
2002-01-01
Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic enviroments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.
Satarasinghe, Praveen; Hamilton, Kojo D; Tarver, Michael J; Buchanan, Robert J; Koltz, Michael T
2018-04-17
Utilization of pedicle screws (PS) for spine stabilization is common in spinal surgery. With reliance on visual inspection of anatomical landmarks prior to screw placement, the free-hand technique requires a high level of surgeon skill and precision. Three-dimensional (3D), computer-assisted virtual neuronavigation improves the precision of PS placement and minimization steps. Twenty-three patients with degenerative, traumatic, or neoplastic pathologies received treatment via a novel three-step PS technique that utilizes a navigated power driver in combination with virtual screw technology. (1) Following visualization of neuroanatomy using intraoperative CT, a navigated 3-mm match stick drill bit was inserted at an anatomical entry point with a screen projection showing a virtual screw. (2) A Navigated Stryker Cordless Driver with an appropriate tap was used to access the vertebral body through a pedicle with a screen projection again showing a virtual screw. (3) A Navigated Stryker Cordless Driver with an actual screw was used with a screen projection showing the same virtual screw. One hundred and forty-four consecutive screws were inserted using this three-step, navigated driver, virtual screw technique. Only 1 screw needed intraoperative revision after insertion using the three-step, navigated driver, virtual PS technique. This amounts to a 0.69% revision rate. One hundred percent of patients had intraoperative CT reconstructed images taken to confirm hardware placement. Pedicle screw placement utilizing the Stryker-Ziehm neuronavigation virtual screw technology with a three step, navigated power drill technique is safe and effective.
Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J
2018-01-01
There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-15
... Devices, Navigation and Display Systems, Radar Systems, Navigational Aids, Mapping Systems and Related... navigation products, including GPS devices, navigation and display systems, radar systems, navigational aids..., radar systems, navigational aids, mapping systems and related software by reason of infringement of one...
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
NASA Astrophysics Data System (ADS)
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
Analysis of Ten Reverse Engineering Tools
NASA Astrophysics Data System (ADS)
Koskinen, Jussi; Lehmonen, Tero
Reverse engineering tools can be used in satisfying the information needs of software maintainers. Especially in case of maintaining large-scale legacy systems tool support is essential. Reverse engineering tools provide various kinds of capabilities to provide the needed information to the tool user. In this paper we analyze the provided capabilities in terms of four aspects: provided data structures, visualization mechanisms, information request specification mechanisms, and navigation features. We provide a compact analysis of ten representative reverse engineering tools for supporting C, C++ or Java: Eclipse Java Development Tools, Wind River Workbench (for C and C++), Understand (for C++), Imagix 4D, Creole, Javadoc, Javasrc, Source Navigator, Doxygen, and HyperSoft. The results of the study supplement the earlier findings in this important area.
Yang, Chi-Lin; Yang, Been-Der; Lin, Mu-Lien; Wang, Yao-Hung; Wang, Jaw-Lin
2010-10-01
Development of a patient-mount navigated intervention (PaMNI) system for spinal diseases. An in vivo clinical human trial was conducted to validate this system. To verify the feasibility of the PaMNI system with the clinical trial on percutaneous pulsed radiofrequency stimulation of dorsal root ganglion (PRF-DRG). Two major image guiding techniques, i.e., computed tomography (CT)-guided and fluoro-guided, were used for spinal intervention. The CT-guided technique provides high spatial resolution, and is claimed to be more accurate than the fluoro-guided technique. Nevertheless, the CT-guided intervention usually reaches higher radiograph exposure than the fluoro-guided counterpart. Some navigated intervention systems were developed to reduce the radiation of CT-guided intervention. Nevertheless, these systems were not popularly used due to the longer operation time, a new protocol for surgeons, and the availability of such a system. The PaMNI system includes 3 components, i.e., a patient-mount miniature tracking unit, an auto-registered reference frame unit, and a user-friendly image processing unit. The PRF-DRG treatment was conducted to find the clinical feasibility of this system. The in vivo clinical trial showed that the accuracy, visual analog scale evaluation after surgery, and radiograph exposure of the PaMNI-guided technique are comparable to the one of conventional fluoro-guided technique, while the operation time is increased by 5 minutes. Combining the virtues of fluoroscopy and CT-guided techniques, our navigation system is operated like a virtual fluoroscopy with augmented CT images. This system elevates the performance of CT-guided intervention and reduces surgeons' radiation exposure risk to a minimum, while keeping low radiation dose to patients like its fluoro-guided counterpart. The clinical trial of PRF-DRG treatment showed the clinical feasibility and efficacy of this system.
Mabray, Marc C; Datta, Sanjit; Lillaney, Prasheel V; Moore, Teri; Gehrisch, Sonja; Talbott, Jason F; Levitt, Michael R; Ghodke, Basavaraj V; Larson, Paul S; Cooke, Daniel L
2016-07-01
Fluoroscopic systems in modern interventional suites have the ability to perform flat panel detector CT (FDCT) with navigational guidance. Fusion with MR allows navigational guidance towards FDCT occult targets. We aim to evaluate the accuracy of this system using single-pass needle placement in a deep brain stimulation (DBS) phantom. MR was performed on a head phantom with DBS lead targets. The head phantom was placed into fixation and FDCT was performed. FDCT and MR datasets were automatically fused using the integrated guidance system (iGuide, Siemens). A DBS target was selected on the MR dataset. A 10 cm, 19 G needle was advanced by hand in a single pass using laser crosshair guidance. Radial error was visually assessed against measurement markers on the target and by a second FDCT. Ten needles were placed using CT-MR fusion and 10 needles were placed without MR fusion, with targeting based solely on FDCT and fusion steps repeated for every pass. Mean radial error was 2.75±1.39 mm as defined by visual assessment to the centre of the DBS target and 2.80±1.43 mm as defined by FDCT to the centre of the selected target point. There were no statistically significant differences in error between MR fusion and non-MR guided series. Single pass needle placement in a DBS phantom using FDCT guidance is associated with a radial error of approximately 2.5-3.0 mm at a depth of approximately 80 mm. This system could accurately target sub-centimetre intracranial lesions defined on MR. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motor with two arms, two wheels and two claws. The inspection robot is designed to realize the function of observation, grasp, walk, rolling, turn, rise, and decline. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on PC/104 bus is chosen as the core of control system. Visible light camera and thermal infrared Camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on Mesh wireless networks by 700 MHz bands. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and laser range scanner were installed in robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect the 500KV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute the navigation and inspection tasks.
Preliminary development of augmented reality systems for spinal surgery
NASA Astrophysics Data System (ADS)
Nguyen, Nhu Q.; Ramjist, Joel M.; Jivraj, Jamil; Jakubovic, Raphael; Deorajh, Ryan; Yang, Victor X. D.
2017-02-01
Surgical navigation has been more actively deployed in open spinal surgeries due to the need for improved precision during procedures. This is increasingly difficult in minimally invasive surgeries due to the lack of visual cues caused by smaller exposure sites, and increases a surgeon's dependence on their knowledge of anatomical landmarks as well as the CT or MRI images. The use of augmented reality (AR) systems and registration technologies in spinal surgeries could allow for improvements to techniques by overlaying a 3D reconstruction of patient anatomy in the surgeon's field of view, creating a mixed reality visualization. The AR system will be capable of projecting the 3D reconstruction onto a field and preliminary object tracking on a phantom. Dimensional accuracy of the mixed media will also be quantified to account for distortions in tracking.
Chakraborty, Shamik; Lall, Rohan; Fanous, Andrew A; Boockvar, John; Langer, David J
2017-01-01
The surgical management of deep brain tumors is often challenging due to the limitations of stereotactic needle biopsies and the morbidity associated with transcortical approaches. We present a novel microscopic navigational technique utilizing the Viewsite Brain Access System (VBAS) (Vycor Medical, Boca Raton, FL, USA) for resection of a deep parietal periventricular high-grade glioma as well as another glioma and a cavernoma with no related morbidity. The approach utilized a navigational tracker mounted on a microscope, which was set to the desired trajectory and depth. It allowed gentle continuous insertion of the VBAS directly to a deep lesion under continuous microscopic visualization, increasing safety by obviating the need to look up from the microscope and thus avoiding loss of trajectory. This technique has broad value for the resection of a variety of deep brain lesions. PMID:28331774
White, Tim; Chakraborty, Shamik; Lall, Rohan; Fanous, Andrew A; Boockvar, John; Langer, David J
2017-02-04
The surgical management of deep brain tumors is often challenging due to the limitations of stereotactic needle biopsies and the morbidity associated with transcortical approaches. We present a novel microscopic navigational technique utilizing the Viewsite Brain Access System (VBAS) (Vycor Medical, Boca Raton, FL, USA) for resection of a deep parietal periventricular high-grade glioma as well as another glioma and a cavernoma with no related morbidity. The approach utilized a navigational tracker mounted on a microscope, which was set to the desired trajectory and depth. It allowed gentle continuous insertion of the VBAS directly to a deep lesion under continuous microscopic visualization, increasing safety by obviating the need to look up from the microscope and thus avoiding loss of trajectory. This technique has broad value for the resection of a variety of deep brain lesions.
López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias
2016-05-01
We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.
Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.
Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele
2015-10-01
Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated in a 3D-Printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometry and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Hughhes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.
2014-01-01
A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that using an advanced display concept, as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload and situation awareness ratings as good as or better than high-time Instrument Flight Rules (IFR)-rated pilots using Baseline Round Dials for a nominal IMC approach. For the second thrust, the results indicate advanced VMC-like approaches are feasible in IMC, for all pilot groups tested for only the Synthetic Vision System (SVS) advanced display concept.
Rover-based visual target tracking validation and mission infusion
NASA Technical Reports Server (NTRS)
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA
2008-05-13
A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA
2012-03-06
A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
The 727 approach energy management system avionics specification (preliminary)
NASA Technical Reports Server (NTRS)
Jackson, D. O.; Lambregts, A. A.
1976-01-01
Hardware and software requirements for an Approach Energy Management System (AEMS) consisting of an airborne digital computer and cockpit displays are presented. The displays provide the pilot with a visual indication of when to manually operate the gear, flaps, and throttles during a delayed flap approach so as to reduce approach time, fuel consumption, and community noise. The AEMS is an independent system that does not interact with other navigation or control systems, and is compatible with manually flown or autopilot coupled approaches. Operational use of the AEMS requires a DME ground station colocated with the flight path reference.
Display technology - Human factors concepts
NASA Astrophysics Data System (ADS)
Stokes, Alan; Wickens, Christopher; Kite, Kirsten
1990-03-01
Recent advances in the design of aircraft cockpit displays are reviewed, with an emphasis on their applicability to automobiles. The fundamental principles of display technology are introduced, and individual chapters are devoted to selective visual attention, command and status displays, foveal and peripheral displays, navigational displays, auditory displays, color and pictorial displays, head-up displays, automated systems, and dual-task performance and pilot workload. Diagrams, drawings, and photographs of typical displays are provided.
Integration of Kinect and Low-Cost Gnss for Outdoor Navigation
NASA Astrophysics Data System (ADS)
Pagliaria, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.
2016-06-01
Since its launch on the market, Microsoft Kinect sensor has represented a great revolution in the field of low cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about 200. The characteristics and the potentiality of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating passing from an indoor to an outdoor environment (and vice versa) is very demanding because the sensors that work properly in one environment are typically unsuitable in the other one. In this sense the Kinect could represent an interesting device allowing bridging the navigation solution between outdoor and indoor. In this work the accuracy and the field of application of the new generation of Kinect sensor have been tested outdoor, considering different lighting conditions and the reflective properties of the emitted ray on different materials. Moreover, an integrated system with a low cost GNSS receiver has been studied, with the aim of taking advantage of the GNSS positioning when the satellite visibility conditions are good enough. A kinematic test has been performed outdoor by using a Kinect sensor and a GNSS receiver and it is here presented.
Ros, Ivo G; Bhagavatula, Partha S; Lin, Huai-Ti; Biewener, Andrew A
2017-02-06
Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation that favoured widest gap steering. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favours vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success.
Ros, Ivo G.; Bhagavatula, Partha S.; Lin, Huai-Ti
2017-01-01
Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation that favoured widest gap steering. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favours vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success. PMID:28163883
Fingerprints selection for topological localization
NASA Astrophysics Data System (ADS)
Popov, Vladimir
2017-07-01
Problems of visual navigation are extensively studied in contemporary robotics. In particular, we can mention different problems of visual landmarks selection, the problem of selection of a minimal set of visual landmarks, selection of partially distinguishable guards, the problem of placement of visual landmarks. In this paper, we consider one-dimensional color panoramas. Such panoramas can be used for creating fingerprints. Fingerprints give us unique identifiers for visually distinct locations by recovering statistically significant features. Fingerprints can be used as visual landmarks for the solution of various problems of mobile robot navigation. In this paper, we consider a method for automatic generation of fingerprints. In particular, we consider the bounded Post correspondence problem and applications of the problem to consensus fingerprints and topological localization. We propose an efficient approach to solve the bounded Post correspondence problem. In particular, we use an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms. In robotic experiments, we consider the average accuracy of reaching of the target point for different lengths of routes and types of fingerprints.
Using a 'value-added' approach for contextual design of geographic information.
May, Andrew J
2013-11-01
The aim of this article is to demonstrate how a 'value-added' approach can be used for user-centred design of geographic information. An information science perspective was used, with value being the difference in outcomes arising from alternative information sets. Sixteen drivers navigated a complex, unfamiliar urban route, using visual and verbal instructions representing the distance-to-turn and junction layout information presented by typical satellite navigation systems. Data measuring driving errors, navigation errors and driver confidence were collected throughout the trial. The results show how driver performance varied considerably according to the geographic context at specific locations, and that there are specific opportunities to add value with enhanced geographical information. The conclusions are that a value-added approach facilitates a more explicit focus on 'desired' (and feasible) levels of end user performance with different information sets, and is a potentially effective approach to user-centred design of geographic information. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Compensation for Unconstrained Catheter Shaft Motion in Cardiac Catheters
Degirmenci, Alperen; Loschak, Paul M.; Tschabrunn, Cory M.; Anter, Elad; Howe, Robert D.
2016-01-01
Cardiac catheterization with ultrasound (US) imaging catheters provides real time US imaging from within the heart, but manually navigating a four degree of freedom (DOF) imaging catheter is difficult and requires extensive training. Existing work has demonstrated robotic catheter steering in constrained bench top environments. Closed-loop control in an unconstrained setting, such as patient vasculature, remains a significant challenge due to friction, backlash, and physiological disturbances. In this paper we present a new method for closed-loop control of the catheter tip that can accurately and robustly steer 4-DOF cardiac catheters and other flexible manipulators despite these effects. The performance of the system is demonstrated in a vasculature phantom and an in vivo porcine animal model. During bench top studies the robotic system converged to the desired US imager pose with sub-millimeter and sub-degree-level accuracy. During animal trials the system achieved 2.0 mm and 0.65° accuracy. Accurate and robust robotic navigation of flexible manipulators will enable enhanced visualization and treatment during procedures. PMID:27525170
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring Planetary surfaces in future will require more autonomy than today. The EU FP7-SPACE Project ProViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotics vision by a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for and exposed to field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with the sensors and pointing devices. We report on the major building blocks and their functions & interfaces, emphasizing on the computer vision parts such as image acquisition (using a novel zoomed 3D-Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image & stereo reconstruction, hazard and slope maps, visual odometry and the recognition of potential scientifically interesting targets.
Accuracy of image-guided surgical navigation using near infrared (NIR) optical tracking
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Farooq, Hamza; Alarcon, Joseph; Yang, Victor X. D.
2015-03-01
Spinal surgery is particularly challenging for surgeons, requiring a high level of expertise and precision without being able to see beyond the surface of the bone. Accurate insertion of pedicle screws is critical considering perforation of the pedicle can result in profound clinical consequences including spinal cord, nerve root, arterial injury, neurological deficits, chronic pain, and/or failed back syndrome. Various navigation systems have been designed to guide pedicle screw fixation. Computed tomography (CT)-based image guided navigation systems increase the accuracy of screw placement allowing for 3- dimensional visualization of the spinal anatomy. Current localization techniques require extensive preparation and introduce spatial deviations. Use of near infrared (NIR) optical tracking allows for realtime navigation of the surgery by utilizing spectral domain multiplexing of light, greatly enhancing the surgeon's situation awareness in the operating room. While the incidence of pedicle screw perforation and complications have been significantly reduced with the introduction of modern navigational technologies, some error exists. Several parameters have been suggested including fiducial localization and registration error, target registration error, and angular deviation. However, many of these techniques quantify error using the pre-operative CT and an intra-operative screenshot without assessing the true screw trajectory. In this study we quantified in-vivo error by comparing the true screw trajectory to the intra-operative trajectory. Pre- and post- operative CT as well as intra-operative screenshots were obtained for a cohort of patients undergoing spinal surgery. We quantified entry point error and angular deviation in the axial and sagittal planes.
Visual Requirements for Human Drivers and Autonomous Vehicles
DOT National Transportation Integrated Search
2016-03-01
Identification of published literature between 1995 and 2013, focusing on determining the quantity and quality of visual information needed under both driving modes (i.e., human and autonomous) to navigate the road safely, especially as it pertains t...
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January, 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. ]n the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the, Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. ]n areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion, As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
Gan, Qi; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Hu, Chuanzhen; Shao, Pengfei; Xu, Ronald X.
2016-01-01
We propose a projective navigation system for fluorescence imaging and image display in a natural mode of visual perception. The system consists of an excitation light source, a monochromatic charge coupled device (CCD) camera, a host computer, a projector, a proximity sensor and a Complementary metal–oxide–semiconductor (CMOS) camera. With perspective transformation and calibration, our surgical navigation system is able to achieve an overall imaging speed higher than 60 frames per second, with a latency of 330 ms, a spatial sensitivity better than 0.5 mm in both vertical and horizontal directions, and a projection bias less than 1 mm. The technical feasibility of image-guided surgery is demonstrated in both agar-agar gel phantoms and an ex vivo chicken breast model embedding Indocyanine Green (ICG). The biological utility of the system is demonstrated in vivo in a classic model of ICG hepatic metabolism. Our benchtop, ex vivo and in vivo experiments demonstrate the clinical potential for intraoperative delineation of disease margin and image-guided resection surgery. PMID:27391764
Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F
2015-04-01
Brain shift and tissue deformation during surgery for intracranial lesions are the main actual limitations of neuro-navigation (NN), which currently relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulties recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction has been successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks and the error was < 3 mm in all cases (100 %). Neuro-navigation using intraoperative US integrated with preoperative MRI is reliable, accurate and user-friendly. Moreover, the adjustments are very helpful in correcting brain shift and tissue distortion. This integrated system allows true real-time feedback during surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and orientation. © Georg Thieme Verlag KG Stuttgart · New York.
DEMS - a second generation diabetes electronic management system.
Gorman, C A; Zimmerman, B R; Smith, S A; Dinneen, S F; Knudsen, J B; Holm, D; Jorgensen, B; Bjornsen, S; Planet, K; Hanson, P; Rizza, R A
2000-06-01
Diabetes electronic management system (DEMS) is a component-based client/server application, written in Visual C++ and Visual Basic, with the database server running Sybase System 11. DEMS is built entirely with a combination of dynamic link libraries (DLLs) and ActiveX components - the only exception is the DEMS.exe. DEMS is a chronic disease management system for patients with diabetes. It is used at the point of care by all members of the diabetes team including physicians, nurses, dieticians, clinical assistants and educators. The system is designed for maximum clinical efficiency and facilitates appropriately supervised delegation of care. Dispersed clinical sites may be supervised from a central location. The system is designed for ease of navigation; immediate provision of many types of automatically generated reports; quality audits; aids to compliance with good care guidelines; and alerts, advisories, prompts, and warnings that guide the care provider. The system now contains data on over 34000 patients and is in daily use at multiple sites.
CellLineNavigator: a workbench for cancer cell line analysis
Krupp, Markus; Itzel, Timo; Maass, Thorsten; Hildebrandt, Andreas; Galle, Peter R.; Teufel, Andreas
2013-01-01
The CellLineNavigator database, freely available at http://www.medicalgenomics.org/celllinenavigator, is a web-based workbench for large scale comparisons of a large collection of diverse cell lines. It aims to support experimental design in the fields of genomics, systems biology and translational biomedical research. Currently, this compendium holds genome wide expression profiles of 317 different cancer cell lines, categorized into 57 different pathological states and 28 individual tissues. To enlarge the scope of CellLineNavigator, the database was furthermore closely linked to commonly used bioinformatics databases and knowledge repositories. To ensure easy data access and search ability, a simple data and an intuitive querying interface were implemented. It allows the user to explore and filter gene expression, focusing on pathological or physiological conditions. For a more complex search, the advanced query interface may be used to query for (i) differentially expressed genes; (ii) pathological or physiological conditions; or (iii) gene names or functional attributes, such as Kyoto Encyclopaedia of Genes and Genomes pathway maps. These queries may also be combined. Finally, CellLineNavigator allows additional advanced analysis of differentially regulated genes by a direct link to the Database for Annotation, Visualization and Integrated Discovery (DAVID) Bioinformatics Resources. PMID:23118487
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on ultimate images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relative stable update of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually-striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Visual Homing in the Absence of Feature-Based Landmark Information
ERIC Educational Resources Information Center
Gillner, Sabine; Weiss, Anja M.; Mallot, Hanspeter A.
2008-01-01
Despite that fact that landmarks play a prominent role in human navigation, experimental evidence on how landmarks are selected and defined by human navigators remains elusive. Indeed, the concept of a "landmark" is itself not entirely clear. In everyday language, the term landmark refers to salient, distinguishable, and usually nameable objects,…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Aircraft communication and navigation....S.-REGISTERED AIRCRAFT ENGAGED IN COMMON CARRIAGE General § 129.17 Aircraft communication and... accuracy required for ATC; (ii) One marker beacon receiver providing visual and aural signals; and (iii...
Interactive knowledge networks for interdisciplinary course navigation within Moodle.
Scherl, Andre; Dethleffsen, Kathrin; Meyer, Michael
2012-12-01
Web-based hypermedia learning environments are widely used in modern education and seem particularly well suited for interdisciplinary learning. Previous work has identified guidance through these complex environments as a crucial problem of their acceptance and efficiency. We reasoned that map-based navigation might provide straightforward and effortless orientation. To achieve this, we developed a clickable and user-oriented concept map-based navigation plugin. This tool is implemented as an extension of Moodle, a widely used learning management system. It visualizes inner and interdisciplinary relations between learning objects and is generated dynamically depending on user set parameters and interactions. This plugin leaves the choice of navigation type to the user and supports direct guidance. Previously developed and evaluated face-to-face interdisciplinary learning materials bridging physiology and physics courses of a medical curriculum were integrated as learning objects, the relations of which were defined by metadata. Learning objects included text pages, self-assessments, videos, animations, and simulations. In a field study, we analyzed the effects of this learning environment on physiology and physics knowledge as well as the transfer ability of third-term medical students. Data were generated from pre- and posttest questionnaires and from tracking student navigation. Use of the hypermedia environment resulted in a significant increase of knowledge and transfer capability. Furthermore, the efficiency of learning was enhanced. We conclude that hypermedia environments based on Moodle and enriched by concept map-based navigation tools can significantly support interdisciplinary learning. Implementation of adaptivity may further strengthen this approach.
Evaluation of navigation interfaces in virtual environments
NASA Astrophysics Data System (ADS)
Mestre, Daniel R.
2014-02-01
When users are immersed in cave-like virtual reality systems, navigational interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually-induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested, in two experimental studies, different locomotion interfaces. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick ®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to impact negatively spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower, as compared to experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness, but manipulating the virtual environment'sperrot structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.
BNDB - the Biochemical Network Database.
Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter
2007-10-02
Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.
2004-01-01
The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. NAVIGATION-ASSISTED ARTHROPLASTY: Five studies were identified that examined navigation-assisted arthroplasty.A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1)A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2)A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3)Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that "the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. 
Longer follow-up is required to determine the respective advantages and disadvantages of both techniques."(4;5) ROBOTIC-ASSISTED ARTHROPLASTY: Four studies were identified that examined robotic-assisted arthroplasty.A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6)Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6)Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6)Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6)An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7)An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8)An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9).
Computer-Assisted Hip and Knee Arthroplasty. Navigation and Active Robotic Systems
2004-01-01
Executive Summary Objective The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. The Technology Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. Review Strategy The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. Summary of Findings Navigation-Assisted Arthroplasty Five studies were identified that examined navigation-assisted arthroplasty. A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1) A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2) A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3) Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that “the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. Longer follow-up is required to determine the respective advantages and disadvantages of both techniques.”(4;5) Robotic-Assisted Arthroplasty Four studies were identified that examined robotic-assisted arthroplasty. 
A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6) Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6) Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6) Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6) An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7) An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8) An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9) PMID:23074452
Towers, John; Burgess-Limerick, Robin; Riek, Stephan
2014-12-01
The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications. Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research. A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene. The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful. Conclusion: A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display. Pilots operating aircraft, such as helicopters and unmanned aerial vehicles, may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.
Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16
1977-08-05
34INVESTIGATION OF SPLITTING OF LIGHT NUCLEI WITH HIGH-ENERGY y -RAYS WITH THE METHOD OF WILSON’S CHAMBER OPERATING IN POWERFUL BEAMS OF ELECTRONIC...boast high reliability, high speed, and extremely modest power requirements. Information oh the Screen Visual display devices greatly facilitate...area of application of these units Includes navigation, control of power systems, machine tools, and manufac- turing processes. Th» ^»abilities of
NAVO MSRC Navigator. Spring 2003
2003-01-01
computational model run on the IBM POWER4 (MARCELLUS) in support of the Airborne Laser Challenge Project II. The data were visualized using Alias|Wavefront Maya...Turbulence in a Jet Stream in the Airborne Laser Context High Performance Computing 11 Largest NAVO MSRC System Becomes Even Bigger and Better 11 Using the smp...centimeters (cm). The resolution requirement to resolve the microjets and the flow outside in the combustor is too severe for any single numerical method
North Pacific Omega Navigation System Validation.
1981-12-31
based upon comparisons with "Loran-C, radar and visual" whose absolute accuracy as references could not be assessed. Similarly, the M/S Nopal Lane...Mellon ..................................... A-82 A5.1.5 M/S Nopal Lane ................... ......... .... .. A-83 A5.1.6 Submarine Omega Performance...A-39 A2-23 OmegaNaph an o Signal Coverage................ o........ A-40 A2-27 Ositaeuion fPSibl Moaverfene ... or.... o .. ga A4 N225
OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.
Zhou, Guangyan; Xia, Jianguo
2018-06-07
Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
Indexing and retrieval of multimedia objects at different levels of granularity
NASA Astrophysics Data System (ADS)
Faudemay, Pascal; Durand, Gwenael; Seyrat, Claude; Tondre, Nicolas
1998-10-01
Intelligent access to multimedia databases for `naive user' should probably be based on queries formulation by `intelligent agents'. These agents should `understand' the semantics of the contents, learn user preferences and deliver to the user a subset of the source contents, for further navigation. The goal of such systems should be to enable `zero-command' access to the contents, while keeping the freedom of choice of the user. Such systems should interpret multimedia contents in terms of multiple audiovisual objects (from video to visual or audio object), and on actions and scenarios.
Event Display for the Visualization of CMS Events
NASA Astrophysics Data System (ADS)
Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.
2011-12-01
During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.
Comparison of Two Electromagnetic Navigation Systems For CT-Guided Punctures: A Phantom Study.
Putzer, D; Arco, D; Schamberger, B; Schanda, F; Mahlknecht, J; Widmann, G; Schullian, P; Jaschke, W; Bale, R
2016-05-01
We compared the targeting accuracy and reliability of two different electromagnetic navigation systems for manually guided punctures in a phantom. CT data sets of a gelatin filled plexiglass phantom were acquired with 1, 3, and 5 mm slice thickness. After paired-point registration of the phantom, a total of 480 navigated stereotactic needle insertions were performed manually using electromagnetic guidance with two different navigation systems (Medtronic Stealth Station: AxiEM; Philips: PercuNav). A control CT was obtained to measure the target positioning error between the planned and actual needle trajectory. Using the Philips PercuNav, the accomplished Euclidean distances were 4.42 ± 1.33 mm, 4.26 ± 1.32 mm, and 4.46 ± 1.56 mm at a slice thickness of 1, 3, and 5 mm, respectively. The mean lateral positional errors were 3.84 ± 1.59 mm, 3.84 ± 1.43 mm, and 3.81 ± 1.71 mm, respectively. Using the Medtronic Stealth Station AxiEM, the Euclidean distances were 3.86 ± 2.28 mm, 3.74 ± 2.1 mm, and 4.81 ± 2.07 mm at a slice thickness of 1, 3, and 5 mm, respectively. The mean lateral positional errors were 3.29 ± 1.52 mm, 3.16 ± 1.52 mm, and 3.93 ± 1.68 mm, respectively. Both electromagnetic navigation devices showed excellent results regarding puncture accuracy in a phantom model. The Medtronic Stealth Station AxiEM provided more accurate results in comparison to the Philips PercuNav for CT with 3 mm slice thickness. One potential benefit of electromagnetic navigation devices is the absence of visual contact between the instrument and the sensor system. Due to possible interference with metal objects, incorrect position sensing may occur. In contrast to the phantom study, patient movement including respiration has to be compensated for in the clinical setting. • Commercially available electromagnetic navigation systems have the potential to improve the therapeutic range for CT guided percutaneous procedures by comparing the needle placement accuracy on the basis of planning CT data sets with different slice thickness. Citation Format: • Putzer D, Arco D, Schamberger B et al. Comparison of Two Electromagnetic Navigation Systems For CT-Guided Punctures: A Phantom Study. Fortschr Röntgenstr 2016; 188: 470 - 478. © Georg Thieme Verlag KG Stuttgart · New York.
Evaluating the Usability of Pinchigator, a system for Navigating Virtual Worlds using Pinch Gloves
NASA Technical Reports Server (NTRS)
Hamilton, George S.; Brookman, Stephen; Dumas, Joseph D. II; Tilghman, Neal
2003-01-01
Appropriate design of two dimensional user interfaces (2D U/I) utilizing the well known WIMP (Window, Icon, Menu, Pointing device) environment for computer software is well studied and guidance can be found in several standards. Three-dimensional U/I design is not nearly so mature as 2D U/I, and standards bodies have not reached consensus on what makes a usable interface. This is especially true when the tools for interacting with the virtual environment may include stereo viewing, real time trackers and pinch gloves instead of just a mouse & keyboard. Over the last several years the authors have created a 3D U/I system dubbed Pinchigator for navigating virtual worlds based on the dVise dV/Mockup visualization software, Fakespace Pinch Gloves and Pohlemus trackers. The current work is to test the usability of the system on several virtual worlds, suggest improvements to increase Pinchigator s usability, and then to generalize about what was learned and how those lessons might be applied to improve other 3D U/I systems.
Web-based visualization of very large scientific astronomy imagery
NASA Astrophysics Data System (ADS)
Bertin, E.; Pillay, R.; Marmo, C.
2015-04-01
Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating point data at terabyte scales, with the ability to precisely adjust image settings in real-time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test and assess the performance of the system and show that a single server can comfortably handle more than a hundred simultaneous users accessing full precision 32 bit astronomy data.
Biomimetic MEMS sensor array for navigation and water detection
NASA Astrophysics Data System (ADS)
Futterknecht, Oliver; Macqueen, Mark O.; Karman, Salmah; Diah, S. Zaleha M.; Gebeshuber, Ille C.
2013-05-01
The focus of this study is biomimetic concept development for a MEMS sensor array for navigation and water detection. The MEMS sensor array is inspired by abstractions of the respective biological functions: polarized skylight-based navigation sensors in honeybees (Apis mellifera) and the ability of African elephants (Loxodonta africana) to detect water. The focus lies on how to navigate to and how to detect water sources in desert-like or remote areas. The goal is to develop a sensor that can provide both, navigation clues and help in detecting nearby water sources. We basically use the information provided by the natural polarization pattern produced by the sunbeams scattered within the atmosphere combined with the capability of the honeybee's compound eye to extrapolate the navigation information. The detection device uses light beam reactive MEMS, which are capable to detect the skylight polarization based on the Rayleigh sky model. For water detection we present various possible approaches to realize the sensor. In the first approach, polarization is used: moisture saturated areas near ground have a small but distinctively different effect on scattering and polarizing light than less moist ones. Modified skylight polarization sensors (Karman, Diah and Gebeshuber, 2012) are used to visualize this small change in scattering. The second approach is inspired by the ability of elephants to detect infrasound produced by underground water reservoirs, and shall be used to determine the location of underground rivers and visualize their exact routes.
Construction of Cognitive Maps to Improve E-Book Reading and Navigation
ERIC Educational Resources Information Center
Li, Liang-Yi; Chen, Gwo-Dong; Yang, Sheng-Jie
2013-01-01
People have greater difficulty reading academic textbooks on screen than on paper. One notable problem is that they cannot construct an effective cognitive map because of the lack of contextual information cues and ineffective navigational mechanisms in e-books. To support the construction of cognitive maps, this paper proposes the visual cue map,…
Familiar route loyalty implies visual pilotage in the homing pigeon
Biro, Dora; Meade, Jessica; Guilford, Tim
2004-01-01
Wide-ranging animals, such as birds, regularly traverse large areas of the landscape efficiently in the course of their local movement patterns, which raises fundamental questions about the cognitive mechanisms involved. By using precision global-positioning-system loggers, we show that homing pigeons (Columba livia) not only come to rely on highly stereotyped yet surprisingly inefficient routes within the local area but are attracted directly back to their individually preferred routes even when released from novel sites off-route. This precise route loyalty demonstrates a reliance on familiar landmarks throughout the flight, which was unexpected under current models of avian navigation. We discuss how visual landmarks may be encoded as waypoints within familiar route maps. PMID:15572457
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blattner, M M; Blattner, D O; Tong, Y
1999-04-01
Easy-to-use interfaces are a class of interfaces that fall between public access interfaces and graphical user interfaces in usability and cognitive difficulty. We describe characteristics of easy-to-use interfaces by the properties of four dimensions: selection, navigation, direct manipulation, and contextual metaphors. Another constraint we introduced was to include as little text as possible, and what text we have will be in at least four languages. Formative evaluations were conducted to identify and isolate these characteristics. Our application is a visual interface for a home automation system intended for a diverse set of users. The design will be expanded to accommodatemore » the visually disabled in the near future.« less
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of BIM based digital sand table for construction management. Aiming at the demands and features of construction management plan for bridge and tunnel engineering, the key functional features of digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange, etc. That involving the technology of 3D visualization and 4D virtual simulation of BIM, breakdown structure of BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Totally, the digital sand table is a visual and virtual engineering information integrated terminal, under the unified data standard system. Also, the applications shall contain visual constructing scheme, virtual constructing schedule, and monitoring of construction, etc. Finally, the applicability of several basic software to the digital sand table is analyzed.
Technical aspects of virtual liver resection planning.
Glombitza, G; Lamadé, W; Demiris, A M; Göpfert, M R; Mayer, A; Bahner, M L; Meinzer, H P; Richter, G; Lehnert, T; Herfarth, C
1998-01-01
Operability of a liver tumor is depending on its three dimensional relation to the intrahepatic vascular trees which define autonomously functioning liver (sub-)segments. Precise operation planning is complicated by anatomic variability, distortion of the vascular trees by the tumor or preceding liver resections. Because of the missing possibility to track the deformation of the liver during the operation an integration of the resection planning system into an intra-operative navigation system is not feasible. So the main task of an operation planning system in this domain is a quantifiable patient selection by exact prediction of post-operative liver function and a quantifiable resection proposal. The system quantifies the organ structures and resection volumes by means of absolute and relative values. It defines resection planes depending on security margins and the vascular trees and presents the data in visualized form as a 3D movie. The new 3D operation planning system offers quantifiable liver resection proposals based on individualized liver anatomy. The results are visualized in digital movies as well as in quantitative reports.
Navigation system for robot-assisted intra-articular lower-limb fracture surgery.
Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja
2016-10-01
In the surgical treatment for lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of bone fragments. However, optimal correction in a 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction in the fractures is then performed by our robotic system based on the virtual reduction. 3D models of bone fragments are segmented from CT scan. Fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction in the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker enabling real-time 3D guidance. The motion commands for the robot connected to the bone fragment are generated, and the fracture physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each one of them was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). Correspondent physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text], when the robot reduced the fracture. Experimental outcome demonstrates the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.
DBMap: a TreeMap-based framework for data navigation and visualization of brain research registry
NASA Astrophysics Data System (ADS)
Zhang, Ming; Zhang, Hong; Tjandra, Donny; Wong, Stephen T. C.
2003-05-01
The purpose of this study is to investigate and apply a new, intuitive and space-conscious visualization framework to facilitate efficient data presentation and exploration of large-scale data warehouses. We have implemented the DBMap framework for the UCSF Brain Research Registry. Such a novel utility would facilitate medical specialists and clinical researchers in better exploring and evaluating a number of attributes organized in the brain research registry. The current UCSF Brain Research Registry consists of a federation of disease-oriented database modules, including Epilepsy, Brain Tumor, Intracerebral Hemorrphage, and CJD (Creuzfeld-Jacob disease). These database modules organize large volumes of imaging and non-imaging data to support Web-based clinical research. While the data warehouse supports general information retrieval and analysis, there lacks an effective way to visualize and present the voluminous and complex data stored. This study investigates whether the TreeMap algorithm can be adapted to display and navigate categorical biomedical data warehouse or registry. TreeMap is a space constrained graphical representation of large hierarchical data sets, mapped to a matrix of rectangles, whose size and color represent interested database fields. It allows the display of a large amount of numerical and categorical information in limited real estate of computer screen with an intuitive user interface. The paper will describe, DBMap, the proposed new data visualization framework for large biomedical databases. Built upon XML, Java and JDBC technologies, the prototype system includes a set of software modules that reside in the application server tier and provide interface to backend database tier and front-end Web tier of the brain registry.
NASA Astrophysics Data System (ADS)
Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi
Large-area seafloor imaging will bring significant benefits to various fields such as academics, resource survey, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance through mapping tubeworm colonies with the area of 3,000 square meters using the AUV Tri-Dog 1 at Tagiri vent field, Kagoshima bay in Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method firstly removes lens distortion, invariances of color and lighting from each image, and then ortho-rectification is performed based on camera pose and seafloor estimated by navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an expansion of the image based method (Pizarro et al., 2003). Using the two types of information realizes an image alignment that is consistent both globally and locally, as well as making the method applicable to data sets with little visual keys. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering the area of around 500 square meters was created from 391 pictures, which covers unique features of the field such as bacteria mats and tubeworm colonies.
Goodlett, C R; Hamre, K M; West, J R
1992-04-10
Spatial learning in rodents requires normal functioning of hippocampal and cortical structures. Recent data suggest that the cerebellum may also be essential. Neurological mutant mice with dysgenesis of the cerebellum provide useful models to examine the effects of abnormal cerebellar function. Mice with one such mutation, Purkinje cell degeneration (pcd), in which Purkinje cells degenerate between the third and fourth postnatal weeks, were evaluated for performance of spatial navigation learning and visual guidance learning in the Morris maze swim-escape task. Unaffected littermates and C57BL/6J mice served as controls. Separate groups of pcd and control mice were tested at 30, 50 and 110 days of age. At all ages, pcd mice had severe deficits in distal-cue (spatial) navigation, failing to decrease path lengths over training and failing to express appropriate spatial biases on probe trials. On the proximal-cue (visual guidance) task, whenever performance differences between groups did occur, they were limited to the initial trials. The ability of the pcd mice to perform the proximal-cue but not the distal-cue task indicates that the massive spatial navigation deficit was not due simply to motor dysfunction. Histological evaluations confirmed that the pcd mutation resulted in Purkinje cell loss without significant depletion of cells in the hippocampal formation. These data provide further evidence that the cerebellum is vital for the expression of behavior directed by spatial cognitive processes.
Assessment of Indoor Route-finding Technology for People with Visual Impairment
Kalia, Amy A.; Legge, Gordon E.; Roy, Rudrava; Ogale, Advait
2010-01-01
This study investigated navigation with route instructions generated by digital-map software and synthetic speech. Participants, either visually impaired or sighted wearing blind folds, successfully located rooms in an unfamiliar building. Users with visual impairment demonstrated better route-finding performance when the technology provided distance information in number of steps rather than walking time or number of feet. PMID:21869851
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
2015-03-01
SWIR Short Wave Infrared VisualSFM Visual Structure from Motion WPAFB Wright Patterson Air Force Base xi ON THE INTEGRATION OF MEDIUM WAVE INFRARED...Structure from Motion Visual Structure from Motion ( VisualSFM ) is an application that performs incremental SfM using images fed into it of a scene [20...too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit to the old one. These new
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of Head Mounted Displays (HMD) for augmented reality (AR) visualization exist, the optical and video-see through type. Several research groups have pursued both approaches for utilizing HMDs for computer aided surgery. While the hardware requirements for a video see through HMD to achieve acceptable time delay and frame rate seem to be enormous the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so called Varioscope AR we have used Tsai's Algorithm for camera calibration. Connection to a surgical navigation system was performed by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer aided surgery (CAS) system by the TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems, the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada) which provided at least 30 frames per second, both with a time delay of one frame.
The Impact of Inherent Instructional Design in Online Courseware.
ERIC Educational Resources Information Center
Harvey, Douglas M.; Lee, Jung
2001-01-01
Examines how the use of server-based courseware development solutions affects the instructional design process when creating online distance education. Highlights include pedagogical, visual interface (e.g., visual metaphor and navigation layout), interaction, and instructional design implications of online courseware. (Contains 54 references.)…
A conditioned visual orientation requires the ellipsoid body in Drosophila
Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng
2015-01-01
Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, innate orientation behavior, directly toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orientate toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut—Type I adenylyl cyclase and Dnc—phosphodiesterase were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578
Concrete bridge deck early problem detection and mitigation using robotics
NASA Astrophysics Data System (ADS)
Gucunski, Nenad; Yi, Jingang; Basily, Basily; Duong, Trung; Kim, Jinyoung; Balaguru, Perumalsamy; Parvardeh, Hooman; Maher, Ali; Najm, Husam
2015-04-01
More economical management of bridges can be achieved through early problem detection and mitigation. The paper describes development and implementation of two fully automated (robotic) systems for nondestructive evaluation (NDE) and minimally invasive rehabilitation of concrete bridge decks. The NDE system named RABIT was developed with the support from Federal Highway Administration (FHWA). It implements multiple NDE technologies, namely: electrical resistivity (ER), impact echo (IE), ground-penetrating radar (GPR), and ultrasonic surface waves (USW). In addition, the system utilizes advanced vision to substitute traditional visual inspection. The RABIT system collects data at significantly higher speeds than it is done using traditional NDE equipment. The associated platform for the enhanced interpretation of condition assessment in concrete bridge decks utilizes data integration, fusion, and deterioration and defect visualization. The interpretation and visualization platform specifically addresses data integration and fusion from the four NDE technologies. The data visualization platform facilitates an intuitive presentation of the main deterioration due to: corrosion, delamination, and concrete degradation, by integrating NDE survey results and high resolution deck surface imaging. The rehabilitation robotic system was developed with the support from National Institute of Standards and Technology-Technology Innovation Program (NIST-TIP). The system utilizes advanced robotics and novel materials to repair problems in concrete decks, primarily early stage delamination and internal cracking, using a minimally invasive approach. Since both systems use global positioning systems for navigation, some of the current efforts concentrate on their coordination for the most effective joint evaluation and rehabilitation.
How to find home backwards? Navigation during rearward homing of Cataglyphis fortis desert ants.
Pfeffer, Sarah E; Wittlinger, Matthias
2016-07-15
Cataglyphis ants are renowned for their impressive navigation skills, which have been studied in numerous experiments during forward locomotion. However, the ants' navigational performance during backward homing when dragging large food loads has not been investigated until now. During backward locomotion, the odometer has to deal with unsteady motion and irregularities in inter-leg coordination. The legs' sensory feedback during backward walking is not just a simple reversal of the forward stepping movements: compared with forward homing, ants are facing towards the opposite direction during backward dragging. Hence, the compass system has to cope with a flipped celestial view (in terms of the polarization pattern and the position of the sun) and an inverted retinotopic image of the visual panorama and landmark environment. The same is true for wind and olfactory cues. In this study we analyze for the first time backward-homing ants and evaluate their navigational performance in channel and open field experiments. Backward-homing Cataglyphis fortis desert ants show remarkable similarities in the performance of homing compared with forward-walking ants. Despite the numerous challenges emerging for the navigational system during backward walking, we show that ants perform quite well in our experiments. Direction and distance gauging was comparable to that of the forward-walking control groups. Interestingly, we found that backward-homing ants often put down the food item and performed foodless search loops around the left food item. These search loops were mainly centred around the drop-off position (and not around the nest position), and increased in length the closer the ants came to their fictive nest site. © 2016. Published by The Company of Biologists Ltd.
Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach
Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei
2017-01-01
One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is definitely suitable for landing within the global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feedbacks the aircraft’s real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to possess an extendable baseline and wide-angle field of view (FOV) against the traditional fixed baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189
Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.
Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei
2017-06-19
[-5]One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is definitely suitable for landing within the global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resource and feedbacks the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to possess an extendable baseline and wide-angle field of view (FOV) against the traditional fixed baseline schemes. Furthermore, accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the interested observation system. Other observation systems besides those based on GNSS are also possible to analyze. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment which in turn hampers the progress on understanding of how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
How Ants Use Vision When Homing Backward.
Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine
2017-02-06
Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. VIDEO ABSTRACT. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Long Range Navigation for Mars Rovers Using Sensor-Based Path Planning and Visual Localisation
NASA Technical Reports Server (NTRS)
Laubach, Sharon L.; Olson, Clark F.; Burdick, Joel W.; Hayati, Samad
1999-01-01
The Mars Pathfinder mission illustrated the benefits of including a mobile robotic explorer on a planetary mission. However, for future Mars rover missions, significantly increased autonomy in navigation is required in order to meet demanding mission criteria. To address these requirements, we have developed new path planning and localisation capabilities that allow a rover to navigate robustly to a distant landmark. These algorithms have been implemented on the JPL Rocky 7 prototype microrover and have been tested extensively in the JPL MarsYard, as well as in natural terrain.
Large-Area Visually Augmented Navigation for Autonomous Underwater Vehicles
2005-06-01
constrain position drift . Correction of errors in position and orientation are made each time the mosaic is updated, which occurs every Lth video frame. They...are the greatest strength of a VAN methodology. It is these measurements which help to correct dead-reckoned drift error and enforce recovery of a...systems. [INSTRUMENT [VARIABLE I INTENAL? I UPDATE RATE PRECISION FRANGE J DRIFT Acoustic Altimeter Z - Altitude yes varies: 0.1-10 Hz 0.01-1.0 m varies
NASA Astrophysics Data System (ADS)
Moody, Marc; Fisher, Robert; Little, J. Kristin
2014-06-01
Boeing has developed a degraded visual environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft's multi function displays (MFD) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processor unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point cloud generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.
Understanding human visual systems and its impact on our intelligent instruments
NASA Astrophysics Data System (ADS)
Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.
2013-09-01
We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer sciences, control theory, and electronic circuits. We discuss two neural paths that employ the input from the vision sense to determine the navigational options and object recognition. They are ventral temporal pathway for object recognition (what?) and dorsal parietal pathway for navigation (where?), respectively. We describe the reflexive and conscious decision centers in cerebral cortex involved with visual attention and gaze control. Interestingly, these require return path though the midbrain for ocular muscle control. We find that the cognitive psychologists currently study human brain employing low-spatial-resolution fMRI with temporal response on the order of a second. In recent years, the life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learnings impact our conscious decisions.
Zhang, Bing; Schmoyer, Denise; Kirov, Stefan; Snoddy, Jay
2004-01-01
Background Microarray and other high-throughput technologies are producing large sets of interesting genes that are difficult to analyze directly. Bioinformatics tools are needed to interpret the functional information in the gene sets. Results We have created a web-based tool for data analysis and data visualization for sets of genes called GOTree Machine (GOTM). This tool was originally intended to analyze sets of co-regulated genes identified from microarray analysis but is adaptable for use with other gene sets from other high-throughput analyses. GOTree Machine generates a GOTree, a tree-like structure to navigate the Gene Ontology Directed Acyclic Graph for input gene sets. This system provides user friendly data navigation and visualization. Statistical analysis helps users to identify the most important Gene Ontology categories for the input gene sets and suggests biological areas that warrant further study. GOTree Machine is available online at . Conclusion GOTree Machine has a broad application in functional genomic, proteomic and other high-throughput methods that generate large sets of interesting genes; its primary purpose is to help users sort for interesting patterns in gene sets. PMID:14975175
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
DOT National Transportation Integrated Search
2013-10-04
Performance based navigation supports the design of more precise flight procedures. However, these new procedures can be visually complex, which may impact the usability of charts that depict the procedures. The purpose of the study was to evaluate w...
Communicating Navigation Data Inside the Cassini-Huygens Project: Visualizations and Tools
NASA Technical Reports Server (NTRS)
Wagner, Sean V.; Gist, Emily M.; Goodson, Troy D.; Hahn, Yungsun; Stumpf, Paul W.; Williams, Powtawche N.
2008-01-01
The Cassini-Huygens Saturn tour poses an interesting navigation challenge. From July 2004 through June 2008, the Cassini orbiter performed 112 of 161 planned maneuvers. This demanding schedule, where maneuvers are often separated by just a few days, motivated the development of maneuver design/analysis automation software tools. Besides generating maneuver designs and presentations, these tools are the mechanism to producing other types of navigation information; information used to facilitate operational decisions on such issues as maneuver cancellation and alternate maneuver strategies. This paper will discuss the navigation data that are communicated inside the Cassini-Huygens Project, as well as the maneuver software tools behind the processing of the data.
Wallet, Grégory; Sauzéon, Hélène; Pala, Prashant Arvind; Larrue, Florian; Zheng, Xia; N'Kaoua, Bernard
2011-01-01
The purpose of this study was to evaluate the effect the visual fidelity of a virtual environment (VE) (undetailed vs. detailed) has on the transfer of spatial knowledge based on the navigation mode (passive vs. active) for three different spatial recall tasks (wayfinding, sketch mapping, and picture sorting). Sixty-four subjects (32 men and 32 women) participated in the experiment. Spatial learning was evaluated by these three tasks in the context of the Bordeaux district. In the wayfinding task, the results indicated that the detailed VE helped subjects to transfer their spatial knowledge from the VE to the real world, irrespective of the navigation mode. In the sketch-mapping task, the detailed VE increased performances compared to the undetailed VE condition, and allowed subjects to benefit from the active navigation. In the sorting task, performances were better in the detailed VE; however, in the undetailed version of the VE, active learning either did not help the subjects or it even deteriorated their performances. These results are discussed in terms of appropriate perceptive-motor and/or spatial representations for each spatial recall task.
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
Intraoperative computed tomography.
Tonn, J C; Schichor, C; Schnell, O; Zausinger, S; Uhl, E; Morhard, D; Reiser, M
2011-01-01
Intraoperative computed tomography (iCT) has gained increasing impact among modern neurosurgical techniques. Multislice CT with a sliding gantry in the OR provides excellent diagnostic image quality in the visualization of vascular lesions as well as bony structures including skull base and spine. Due to short acquisition times and a high spatial and temporal resolution, various modalities such as iCT-angiography, iCT-cerebral perfusion and the integration of intraoperative navigation with automatic re-registration after scanning can be performed. This allows a variety of applications, e.g. intraoperative angiography, intraoperative cerebral perfusion studies, update of cerebral and spinal navigation, stereotactic procedures as well as resection control in tumour surgery. Its versatility promotes its use in a multidisciplinary setting. Radiation exposure is comparable to standard CT systems outside the OR. For neurosurgical purposes, however, new hardware components (e.g. a radiolucent headholder system) had to be developed. Having a different range of applications compared to intraoperative MRI, it is an attractive modality for intraoperative imaging being comparatively easy to install and cost efficient.
Computer-aided evaluation of the railway track geometry on the basis of satellite measurements
NASA Astrophysics Data System (ADS)
Specht, Cezary; Koc, Władysław; Chrostowski, Piotr
2016-05-01
In recent years, all over the world there has been a period of intensive development of GNSS (Global Navigation Satellite Systems) measurement techniques and their extension for the purpose of their applications in the field of surveying and navigation. Moreover, in many countries a rising trend in the development of rail transportation systems has been noticed. In this paper, a method of railway track geometry assessment based on mobile satellite measurements is presented. The paper shows the implementation effects of satellite surveying railway geometry. The investigation process described in the paper is divided on two phases. The first phase is the GNSS mobile surveying and the analysis obtained data. The second phase is the analysis of the track geometry using the flat coordinates from the surveying. The visualization of the measured route, separation and quality assessment of the uniform geometric elements (straight sections, arcs), identification of the track polygon (main directions and intersection angles) are discussed and illustrated by the calculation example within the article.
Mongeau, R; Casu, M A; Pani, L; Pillolla, G; Lianas, L; Giachetti, A
2008-05-01
The vast amount of heterogeneous data generated in various fields of neurosciences such as neuropsychopharmacology can hardly be classified using traditional databases. We present here the concept of a virtual archive, spatially referenced over a simplified 3D brain map and accessible over the Internet. A simple prototype (available at http://aquatics.crs4.it/neuropsydat3d) has been realized using current Web-based virtual reality standards and technologies. It illustrates how primary literature or summary information can easily be retrieved through hyperlinks mapped onto a 3D schema while navigating through neuroanatomy. Furthermore, 3D navigation and visualization techniques are used to enhance the representation of brain's neurotransmitters, pathways and the involvement of specific brain areas in any particular physiological or behavioral functions. The system proposed shows how the use of a schematic spatial organization of data, widely exploited in other fields (e.g. Geographical Information Systems) can be extremely useful to develop efficient tools for research and teaching in neurosciences.
From chemotaxis to the cognitive map: The function of olfaction
Jacobs, Lucia F.
2012-01-01
A paradox of vertebrate brain evolution is the unexplained variability in the size of the olfactory bulb (OB), in contrast to other brain regions, which scale predictably with brain size. Such variability appears to be the result of selection for olfactory function, yet there is no obvious concordance that would predict the causal relationship between OB size and behavior. This discordance may derive from assuming the primary function of olfaction is odorant discrimination and acuity. If instead the primary function of olfaction is navigation, i.e., predicting odorant distributions in time and space, variability in absolute OB size could be ascribed and explained by variability in navigational demand. This olfactory spatial hypothesis offers a single functional explanation to account for patterns of olfactory system scaling in vertebrates, the primacy of olfaction in spatial navigation, even in visual specialists, and proposes an evolutionary scenario to account for the convergence in olfactory structure and function across protostomes and deuterostomes. In addition, the unique percepts of olfaction may organize odorant information in a parallel map structure. This could have served as a scaffold for the evolution of the parallel map structure of the mammalian hippocampus, and possibly the arthropod mushroom body, and offers an explanation for similar flexible spatial navigation strategies in arthropods and vertebrates. PMID:22723365
The use of interactive graphical maps for browsing medical/health Internet information resources
Boulos, Maged N Kamel
2003-01-01
As online information portals accumulate metadata descriptions of Web resources, it becomes necessary to develop effective ways for visualising and navigating the resultant huge metadata repositories as well as the different semantic relationships and attributes of described Web resources. Graphical maps provide a good method to visualise, understand and navigate a world that is too large and complex to be seen directly like the Web. Several examples of maps designed as a navigational aid for Web resources are presented in this review with an emphasis on maps of medical and health-related resources. The latter include HealthCyberMap maps , which can be classified as conceptual information space maps, and the very abstract and geometric Visual Net maps of PubMed (for demos). Information resources can be also organised and navigated based on their geographic attributes. Some of the maps presented in this review use a Kohonen Self-Organising Map algorithm, and only HealthCyberMap uses a Geographic Information System to classify Web resource data and render the maps. Maps based on familiar metaphors taken from users' everyday life are much easier to understand. Associative and pictorial map icons that enable instant recognition and comprehension are preferred to geometric ones and are key to successful maps for browsing medical/health Internet information resources. PMID:12556244
Concept of Operations for Commercial and Business Aircraft Synthetic Vision Systems. 1.0
NASA Technical Reports Server (NTRS)
Williams Daniel M.; Waller, Marvin C.; Koelling, John H.; Burdette, Daniel W.; Capron, William R.; Barry, John S.; Gifford, Richard B.; Doyle, Thomas M.
2001-01-01
A concept of operations (CONOPS) for the Commercial and Business (CaB) aircraft synthetic vision systems (SVS) is described. The CaB SVS is expected to provide increased safety and operational benefits in normal and low visibility conditions. Providing operational benefits will promote SVS implementation in the Net, improve aviation safety, and assist in meeting the national aviation safety goal. SVS will enhance safety and enable consistent gate-to-gate aircraft operations in normal and low visibility conditions. The goal for developing SVS is to support operational minima as low as Category 3b in a variety of environments. For departure and ground operations, the SVS goal is to enable operations with a runway visual range of 300 feet. The system is an integrated display concept that provides a virtual visual environment. The SVS virtual visual environment is composed of three components: an enhanced intuitive view of the flight environment, hazard and obstacle defection and display, and precision navigation guidance. The virtual visual environment will support enhanced operations procedures during all phases of flight - ground operations, departure, en route, and arrival. The applications selected for emphasis in this document include low visibility departures and arrivals including parallel runway operations, and low visibility airport surface operations. These particular applications were selected because of significant potential benefits afforded by SVS.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
Cognitive load of navigating without vision when guided by virtual sound versus spatial language.
Klatzky, Roberta L; Marston, James R; Giudice, Nicholas A; Golledge, Reginald G; Loomis, Jack M
2006-12-01
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
2012-01-01
Background Real-time cardiovascular magnetic resonance (rtCMR) is considered attractive for guiding TAVI. Owing to an unlimited scan plane orientation and an unsurpassed soft-tissue contrast with simultaneous device visualization, rtCMR is presumed to allow safe device navigation and to offer optimal orientation for precise axial positioning. We sought to evaluate the preclinical feasibility of rtCMR-guided transarterial aortic valve implatation (TAVI) using the nitinol-based Medtronic CoreValve bioprosthesis. Methods rtCMR-guided transfemoral (n = 2) and transsubclavian (n = 6) TAVI was performed in 8 swine using the original CoreValve prosthesis and a modified, CMR-compatible delivery catheter without ferromagnetic components. Results rtCMR using TrueFISP sequences provided reliable imaging guidance during TAVI, which was successful in 6 swine. One transfemoral attempt failed due to unsuccessful aortic arch passage and one pericardial tamponade with subsequent death occurred as a result of ventricular perforation by the device tip due to an operating error, this complication being detected without delay by rtCMR. rtCMR allowed for a detailed, simultaneous visualization of the delivery system with the mounted stent-valve and the surrounding anatomy, resulting in improved visualization during navigation through the vasculature, passage of the aortic valve, and during placement and deployment of the stent-valve. Post-interventional success could be confirmed using ECG-triggered time-resolved cine-TrueFISP and flow-sensitive phase-contrast sequences. Intended valve position was confirmed by ex-vivo histology. Conclusions Our study shows that rtCMR-guided TAVI using the commercial CoreValve prosthesis in conjunction with a modified delivery system is feasible in swine, allowing improved procedural guidance including immediate detection of complications and direct functional assessment with reduction of radiation and omission of contrast media. PMID:22453050
Kahlert, Philipp; Parohl, Nina; Albert, Juliane; Schäfer, Lena; Reinhardt, Renate; Kaiser, Gernot M; McDougall, Ian; Decker, Brad; Plicht, Björn; Erbel, Raimund; Eggebrecht, Holger; Ladd, Mark E; Quick, Harald H
2012-03-27
Real-time cardiovascular magnetic resonance (rtCMR) is considered attractive for guiding TAVI. Owing to an unlimited scan plane orientation and an unsurpassed soft-tissue contrast with simultaneous device visualization, rtCMR is presumed to allow safe device navigation and to offer optimal orientation for precise axial positioning. We sought to evaluate the preclinical feasibility of rtCMR-guided transarterial aortic valve implatation (TAVI) using the nitinol-based Medtronic CoreValve bioprosthesis. rtCMR-guided transfemoral (n = 2) and transsubclavian (n = 6) TAVI was performed in 8 swine using the original CoreValve prosthesis and a modified, CMR-compatible delivery catheter without ferromagnetic components. rtCMR using TrueFISP sequences provided reliable imaging guidance during TAVI, which was successful in 6 swine. One transfemoral attempt failed due to unsuccessful aortic arch passage and one pericardial tamponade with subsequent death occurred as a result of ventricular perforation by the device tip due to an operating error, this complication being detected without delay by rtCMR. rtCMR allowed for a detailed, simultaneous visualization of the delivery system with the mounted stent-valve and the surrounding anatomy, resulting in improved visualization during navigation through the vasculature, passage of the aortic valve, and during placement and deployment of the stent-valve. Post-interventional success could be confirmed using ECG-triggered time-resolved cine-TrueFISP and flow-sensitive phase-contrast sequences. Intended valve position was confirmed by ex-vivo histology. Our study shows that rtCMR-guided TAVI using the commercial CoreValve prosthesis in conjunction with a modified delivery system is feasible in swine, allowing improved procedural guidance including immediate detection of complications and direct functional assessment with reduction of radiation and omission of contrast media.
Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation
NASA Astrophysics Data System (ADS)
Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.
2018-05-01
A pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, thus preventing a correct reception of the satellite signal. The bridging between GNSS outages, as well as the vehicle attitude reconstruction, can be recovered by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D or RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low-cost, easiness of use and raw data accessibility. The latter has been selected for the high-quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different because RGB-D cameras acquire both RGB and depth data, allowing to solve the scale problem, which is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with a decimeter accuracy, that is 15 % better than the one obtained when using the Canon EOS M camera.
Immune systems are not just for making you feel better: they are for controlling autonomous robots
NASA Astrophysics Data System (ADS)
Rosenblum, Mark
2005-05-01
The typical algorithm for robot autonomous navigation in off-road complex environments involves building a 3D map of the robot's surrounding environment using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress using these methods, these systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, that combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computation model, the resulting system will be able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and more robust operation as a result of the system's ability to improve its performance through interaction with the environment.
A Hyperbolic Ontology Visualization Tool for Model Application Programming Interface Documentation
NASA Technical Reports Server (NTRS)
Hyman, Cody
2011-01-01
Spacecraft modeling, a critically important portion in validating planned spacecraft activities, is currently carried out using a time consuming method of mission to mission model implementations and integration. A current project in early development, Integrated Spacecraft Analysis (ISCA), aims to remedy this hindrance by providing reusable architectures and reducing time spent integrating models with planning and sequencing tools. The principle objective of this internship was to develop a user interface for an experimental ontology-based structure visualization of navigation and attitude control system modeling software. To satisfy this, a number of tree and graph visualization tools were researched and a Java based hyperbolic graph viewer was selected for experimental adaptation. Early results show promise in the ability to organize and display large amounts of spacecraft model documentation efficiently and effectively through a web browser. This viewer serves as a conceptual implementation for future development but trials with both ISCA developers and end users should be performed to truly evaluate the effectiveness of continued development of such visualizations.
33 CFR 62.51 - Western Rivers Marking System.
Code of Federal Regulations, 2012 CFR
2012-07-01
....51 Section 62.51 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY AIDS TO NAVIGATION UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.51 Western Rivers Marking System. (a) A variation of the standard U.S. aids to navigation system described above is employed...
33 CFR 62.51 - Western Rivers Marking System.
Code of Federal Regulations, 2013 CFR
2013-07-01
....51 Section 62.51 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY AIDS TO NAVIGATION UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.51 Western Rivers Marking System. (a) A variation of the standard U.S. aids to navigation system described above is employed...
33 CFR 62.51 - Western Rivers Marking System.
Code of Federal Regulations, 2014 CFR
2014-07-01
....51 Section 62.51 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY AIDS TO NAVIGATION UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.51 Western Rivers Marking System. (a) A variation of the standard U.S. aids to navigation system described above is employed...
Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.
Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James
2016-03-21
Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
Visualization of protein interaction networks: problems and solutions
2013-01-01
Background Visualization concerns the representation of data visually and is an important task in scientific research. Protein-protein interactions (PPI) are discovered using either wet lab techniques, such mass spectrometry, or in silico predictions tools, resulting in large collections of interactions stored in specialized databases. The set of all interactions of an organism forms a protein-protein interaction network (PIN) and is an important tool for studying the behaviour of the cell machinery. Since graphic representation of PINs may highlight important substructures, e.g. protein complexes, visualization is more and more used to study the underlying graph structure of PINs. Although graphs are well known data structures, there are different open problems regarding PINs visualization: the high number of nodes and connections, the heterogeneity of nodes (proteins) and edges (interactions), the possibility to annotate proteins and interactions with biological information extracted by ontologies (e.g. Gene Ontology) that enriches the PINs with semantic information, but complicates their visualization. Methods In these last years many software tools for the visualization of PINs have been developed. Initially thought for visualization only, some of them have been successively enriched with new functions for PPI data management and PIN analysis. The paper analyzes the main software tools for PINs visualization considering four main criteria: (i) technology, i.e. availability/license of the software and supported OS (Operating System) platforms; (ii) interoperability, i.e. ability to import/export networks in various formats, ability to export data in a graphic format, extensibility of the system, e.g. through plug-ins; (iii) visualization, i.e. supported layout and rendering algorithms and availability of parallel implementation; (iv) analysis, i.e. availability of network analysis functions, such as clustering or mining of the graph, and the possibility to interact with external databases. Results Currently, many tools are available and it is not easy for the users choosing one of them. Some tools offer sophisticated 2D and 3D network visualization making available many layout algorithms, others tools are more data-oriented and support integration of interaction data coming from different sources and data annotation. Finally, some specialistic tools are dedicated to the analysis of pathways and cellular processes and are oriented toward systems biology studies, where the dynamic aspects of the processes being studied are central. Conclusion A current trend is the deployment of open, extensible visualization tools (e.g. Cytoscape), that may be incrementally enriched by the interactomics community with novel and more powerful functions for PIN analysis, through the development of plug-ins. On the other hand, another emerging trend regards the efficient and parallel implementation of the visualization engine that may provide high interactivity and near real-time response time, as in NAViGaTOR. From a technological point of view, open-source, free and extensible tools, like Cytoscape, guarantee a long term sustainability due to the largeness of the developers and users communities, and provide a great flexibility since new functions are continuously added by the developer community through new plug-ins, but the emerging parallel, often closed-source tools like NAViGaTOR, can offer near real-time response time also in the analysis of very huge PINs. PMID:23368786
Neuropsychological Components of Imagery Processing, Final Technical Report.
ERIC Educational Resources Information Center
Kosslyn, Stephen M.
High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…
NASA Astrophysics Data System (ADS)
B. Mondal, Suman; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel
2015-07-01
The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.
Comprehension of Navigation Directions
NASA Technical Reports Server (NTRS)
Schneider, Vivian I.; Healy, Alice F.
2000-01-01
In an experiment simulating communication between air traffic controllers and pilots, subjects were given navigation instructions varying in length telling them to move in a space represented by grids on a computer screen. The subjects followed the instructions by clicking on the grids in the locations specified. Half of the subjects read the instructions, and half heard them. Half of the subjects in each modality condition repeated back the instructions before following them,and half did not. Performance was worse for the visual than for the auditory modality on the longer messages. Repetition of the instructions generally depressed performance, especially with the longer messages, which required more output than did the shorter messages, and especially with the visual modality, in which phonological recoding from the visual input to the spoken output was necessary. These results are explained in terms of the degrading effects of output interference on memory for instructions.
Comprehension of Navigation Directions
NASA Technical Reports Server (NTRS)
Healy, Alice F.; Schneider, Vivian I.
2002-01-01
Subjects were shown navigation instructions varying in length directing them to move in a space represented by grids on a computer screen. They followed the instructions by clicking on the grids in the locations specified. Some subjects repeated back the instructions before following them, some did not, and others repeated back the instructions in reduced form, including only the critical words. The commands in each message were presented simultaneously for half of the subjects and sequentially for the others. For the longest messages, performance was better on the initial commands and worse on the final commands with simultaneous than with sequential presentation. Instruction repetition depressed performance, but reduced repetition removed this disadvantage. Effects of presentation format were attributed to visual scanning strategies. The advantage for reduced repetition was attributable either to enhanced visual scanning or to reduced output interference. A follow-up study with auditory presentation supported the visual scanning explanation.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
The Effects of Restricted Peripheral Field-of-View on Spatial Learning while Navigating.
Barhorst-Cates, Erica M; Rand, Kristina M; Creem-Regehr, Sarah H
2016-01-01
Recent work with simulated reductions in visual acuity and contrast sensitivity has found decrements in survey spatial learning as well as increased attentional demands when navigating, compared to performance with normal vision. Given these findings, and previous work showing that peripheral field loss has been associated with impaired mobility and spatial memory for room-sized spaces, we investigated the role of peripheral vision during navigation using a large-scale spatial learning paradigm. First, we aimed to establish the magnitude of spatial memory errors at different levels of field restriction. Second, we tested the hypothesis that navigation under these different levels of restriction would use additional attentional resources. Normally sighted participants walked on novel real-world paths wearing goggles that restricted the field-of-view (FOV) to severe (15°, 10°, 4°, or 0°) or mild angles (60°) and then pointed to remembered target locations using a verbal reporting measure. They completed a concurrent auditory reaction time task throughout each path to measure cognitive load. Only the most severe restrictions (4° and blindfolded) showed impairment in pointing error compared to the mild restriction (within-subjects). The 10° and 4° conditions also showed an increase in reaction time on the secondary attention task, suggesting that navigating with these extreme peripheral field restrictions demands the use of limited cognitive resources. This comparison of different levels of field restriction suggests that although peripheral field loss requires the actor to use more attentional resources while navigating starting at a less extreme level (10°), spatial memory is not negatively affected until the restriction is very severe (4°). These results have implications for understanding of the mechanisms underlying spatial learning during navigation and the approaches that may be taken to develop assistance for navigation with visual impairment.
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
Integrated genome browser: visual analytics platform for genomics.
Freese, Nowlan H; Norris, David C; Loraine, Ann E
2016-07-15
Genome browsers that support fast navigation through vast datasets and provide interactive visual analytics functions can help scientists achieve deeper insight into biological systems. Toward this end, we developed Integrated Genome Browser (IGB), a highly configurable, interactive and fast open source desktop genome browser. Here we describe multiple updates to IGB, including all-new capabilities to display and interact with data from high-throughput sequencing experiments. To demonstrate, we describe example visualizations and analyses of datasets from RNA-Seq, ChIP-Seq and bisulfite sequencing experiments. Understanding results from genome-scale experiments requires viewing the data in the context of reference genome annotations and other related datasets. To facilitate this, we enhanced IGB's ability to consume data from diverse sources, including Galaxy, Distributed Annotation and IGB-specific Quickload servers. To support future visualization needs as new genome-scale assays enter wide use, we transformed the IGB codebase into a modular, extensible platform for developers to create and deploy all-new visualizations of genomic data. IGB is open source and is freely available from http://bioviz.org/igb aloraine@uncc.edu. © The Author 2016. Published by Oxford University Press.
Visualization techniques for computer network defense
NASA Astrophysics Data System (ADS)
Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew
2011-06-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND comprises multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
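The abstract does not name the specific classifiers ORCA employs. Purely as an illustration of how a semi-supervised learner can triage aggregated alert events from a small set of analyst labels, here is a sketch using synthetic features and scikit-learn's LabelSpreading; it is an assumed stand-in, not the ORCA implementation.

    # Hypothetical sketch: semi-supervised triage of aggregated alert events.
    # Features, labels, and the classifier choice are illustrative, not ORCA's.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))            # per-alert features, e.g. rate, port entropy
    y = np.full(500, -1)                     # -1 marks events no analyst has labeled
    y[:20] = rng.integers(0, 2, size=20)     # 20 analyst-labeled events: 0=benign, 1=threat

    model = LabelSpreading(kernel="knn", n_neighbors=7)
    model.fit(X, y)                          # propagate the 20 labels to all 500 events
    threat_score = model.label_distributions_[:, 1]
    print("events to review first:", np.argsort(threat_score)[-5:])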
Exploring Scientific Information for Policy Making under Deep Uncertainty
NASA Astrophysics Data System (ADS)
Forni, L.; Galaitsi, S.; Mehta, V. K.; Escobar, M.; Purkey, D. R.; Depsky, N. J.; Lima, N. A.
2016-12-01
Each actor evaluating potential management strategies brings her/his own distinct set of objectives to a complex decision space of system uncertainties. The diversity of these objectives requires detailed and rigorous analyses that respond to multifaceted challenges. However, the utility of this information depends on the accessibility of scientific information to decision makers. This paper demonstrates data visualization tools for presenting scientific results to decision makers in two case studies, La Paz/El Alto, Bolivia, and Yuba County, California. Visualization output from the case studies combines spatiotemporal, multivariate and multirun/multiscenario information to produce information corresponding to the objectives defined by key actors and stakeholders. These tools can manage complex data and distill scientific information into accessible formats. Using the visualizations, scientists and decision makers can navigate the decision space and potential objective trade-offs to facilitate discussion and consensus building. These efforts can support the identification of stable negotiated agreements between different stakeholders.
Visual gate for brain-computer interfaces.
Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H
2009-01-01
Brain-Computer Interfaces (BCI) based on event related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to cue visuospatial processing, which improved the classification accuracy (error as low as 6%) of ERPs in response to left vs. right cues soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.
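The study's exact spatial-filter computation is not reproduced here. As a rough illustration of the general approach, learning a subject-specific linear weighting over channels that separates left- from right-cue epochs, consider this sketch on synthetic data; the time window, dimensions, and classifier are assumptions.

    # Illustrative sketch: linear separation of left- vs right-cue ERP epochs.
    # The epochs are synthetic; the study's actual spatial-filter method is not shown.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    n_epochs, n_channels, n_times = 120, 32, 25
    epochs = rng.normal(size=(n_epochs, n_channels, n_times))  # EEG cut around 150-200 ms
    labels = rng.integers(0, 2, size=n_epochs)                 # 0 = left cue, 1 = right cue

    # Average over the time window, leaving one value per channel; the LDA weight
    # vector over channels then plays the role of a spatial filter.
    features = epochs.mean(axis=2)
    lda = LinearDiscriminantAnalysis().fit(features, labels)
    print("channel weights:", np.round(lda.coef_.ravel()[:5], 3))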
Learning and Prediction of Slip from Visual Information
NASA Technical Reports Server (NTRS)
Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro
2007-01-01
This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage which can occur on certain surfaces, such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including: soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
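The described pipeline, terrain type recognition followed by per-terrain nonlinear regression from geometry to slip, can be miniaturized as follows. Everything here is synthetic, and the learners are generic stand-ins rather than the paper's actual models.

    # Toy two-stage pipeline: recognize terrain from appearance, then predict slip
    # from geometry with a per-terrain nonlinear regressor. All data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(2)
    appearance = rng.normal(size=(300, 8))        # color/texture features per map cell
    terrain = rng.integers(0, 3, size=300)        # 0=soil, 1=sand, 2=gravel
    slope = rng.uniform(0, 25, size=300)          # degrees
    slip = np.clip(2.0 * (terrain + 1) * slope + rng.normal(0, 5, 300), 0, 100)  # slip, %

    clf = RandomForestClassifier().fit(appearance, terrain)
    regs = {t: RandomForestRegressor().fit(slope[terrain == t].reshape(-1, 1),
                                           slip[terrain == t]) for t in range(3)}

    # Remote prediction for an unvisited cell: classify terrain, then regress slip.
    t_hat = clf.predict(appearance[:1])[0]
    print("predicted slip (%):", round(regs[t_hat].predict([[12.0]])[0], 1))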
[Personnel with poor vision at fighter pilot school].
Corbé, C; Menu, J P
1997-10-01
The piloting of fighter aircraft, the navigation of a space shuttle, and the piloting of a helicopter in tactical flight at an altitude of 50 metres require the use of all the sensory systems: ocular, vestibular, proprioceptive ... Thus, the selection and follow-up of the pilots of these aerial vehicles require a very complete study of medical parameters, in particular the sensory systems and notably the visual system. The doctors and expert researchers in aeronautical and space medicine of the Army Health Department, who are in charge of the medical supervision of flight crews, must study, create, and improve tests of visual sensory exploration developed from fundamental and applied research. These tests, validated with military pilots, were applied in ophthalmology to the assessment of normal and deficient vision. A proposal to modify the World Health Organisation norms applied to vision in low-vision persons was also introduced.
Manzanares, Aarón; Menayo, Ruperto; Segado, Francisco; Salmerón, Diego; Cano, Juan Antonio
2015-01-01
The visual behaviour is a determining factor in sailing due to the influence of the environmental conditions. The aim of this research was to determine the visual behaviour pattern of sailors with different practice time in one start race, applying a probabilistic model based on Markov chains. The sample of this study consisted of 20 sailors, distributed in two groups, top ranking (n = 10) and bottom ranking (n = 10), all of whom competed in the Optimist class. An automated measurement system integrating the VSail-Trainer sail simulator and the Eye Tracking System(TM) was used. The variables under consideration were the sequence of fixations and the fixation recurrence time on each location for each sailor. The event consisted of a simulated regatta start, with stable conditions of wind, competitors and sea. Results show that top ranking sailors produce a low recurrence time on relevant locations and a higher one on irrelevant locations, while bottom ranking sailors produce a low recurrence time on most of the locations. The visual pattern of the bottom ranking sailors is focused around two visual pivots, which does not happen in the top ranking sailors' pattern. In conclusion, the Markov chain analysis has made it possible to characterize and compare the visual behaviour patterns of the top and bottom ranking sailors.
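The core of such a Markov-chain analysis is estimating a transition matrix over gaze locations from the observed fixation sequence. A minimal sketch, with an invented location set and sequence:

    # Minimal sketch: first-order Markov transition matrix from a fixation sequence.
    # The location labels and the sequence itself are invented for illustration.
    import numpy as np

    locations = ["sail", "bow", "competitor", "start_line"]
    sequence = ["sail", "bow", "sail", "start_line", "competitor", "sail", "bow", "sail"]

    idx = {loc: i for i, loc in enumerate(locations)}
    counts = np.zeros((len(locations), len(locations)))
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a], idx[b]] += 1

    # Row-normalize to get P(next fixation location | current fixation location).
    transition = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    print(np.round(transition, 2))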
A Small Lunar Rover for Reconnaissance in the Framework of ExoGeoLab Project, System Level Design
NASA Astrophysics Data System (ADS)
Noroozi, A.; Ha, L.; van Dalen, P.; Maas, A.; de Raedt, S.; Poulakis, P.; Foing, B. H.
2009-04-01
Scientific research is based on accurate measurement and so depends on the availability of accurate instruments. In planetary science and exploration it is often difficult, or in some cases even impossible, to gather accurate and direct information from a specified target. It is important to gather as much information as possible in order to analyze it and extract scientific data. One possibility is to send equipment to the target and perform the measurements locally. The measurement data are then sent to a base station for further analysis. To send measurement instruments to a measurement point, it is important to have a good estimate of the environmental situation there. This information can be collected by sending a pilot rover to the area of interest to collect visual information. The aim of this work is to develop a tele-operated small rover, of Google Lunar X-Prize (GLXP) class, which is capable of surviving in the Moon environment and performing reconnaissance to provide visual information to the base station of the ExoGeoLab project of ESA/ESTEC. Using state-of-the-art developments in electronics, software and communication technologies allows us to increase accuracy while reducing size and power consumption. The target mass of the rover is less than 5 kg and its target dimensions are 300 x 60 x 80 mm. The small size of the rover gives the possibility of accessing places which are normally out of reach. The required power for operation and the cost of launch are considerably reduced compared to large rovers, which makes the mission more cost effective. The mission of the rover is to capture high resolution images and transmit them to the base station. The data link between the rover and the base station is wireless, and the rover must supply its own energy. The base station can be either a habitat or a relay station. The navigation of the rover is controlled by an operator in a habitat who has a view from the stereo camera on the rover. This stereo camera provides image information to the base and opens the possibility of future autonomous navigation using three-dimensional image recognition software. As the navigation view should have minimum delay, the resolution of the stereo camera is not very high. The rover design is divided into four work packages: remote imaging, remote manual navigation, locomotion and structure, and the power system. The remote imaging work package is responsible for capturing high resolution images, transmitting image data to the base station via the wireless link, and storing the data for further processing. Remote manual navigation handles the tele-operation: it collects stereo images and navigation sensor readouts, transmits stereo images and navigation data to the base station via the wireless link, displays the images and sensor status in real time on the operator's monitor, receives commands from the operator's joystick, transfers navigation commands to the rover via the wireless link, and operates the actuators accordingly. Locomotion and structure covers the design of the body structure and locomotion system based on the specifications of the Moon environment. The target specifications of the rover locomotion system are a maximum speed of 200 m/h, a maximum acceleration of 0.554 m/s², and a maximum slope angle of 20°. The power system for the rover includes the solar panel, batteries and power electronics mounted on the rover. The energy storage in the rover should suffice for a minimum of 500 m of movement on the Moon.
Subsequently, it should provide energy for the other sub-systems to communicate, navigate and transmit the data. Considering the harsh environmental conditions on the Moon, such as dust, the temperature range and radiation, it is vital for the mission that these issues be considered in the design in order to correctly dimension reliability and, if necessary, redundancy. Corrosion-resistant materials should be used to ensure the survival of the mechanical structure, moving parts and other sensitive parts such as the electronics. The high temperature variation should be considered in the design of the structure and electronics, and finally the electronics should be radiation protected.
Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition
NASA Astrophysics Data System (ADS)
Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.
2017-05-01
In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).
New vision based navigation clue for a regular colonoscope's tip
NASA Astrophysics Data System (ADS)
Mekaouar, Anouar; Ben Amar, Chokri; Redarce, Tanneguy
2009-02-01
Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be performed safely. Indeed, the practitioner must contend with both the tortuousness of the colon and the mastering of the colonoscope. He or she has to take the visual data acquired by the scope's tip into account and rely mostly on common sense and skill to steer it in a fashion that promotes safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. First, we consider a patch of the inner colon depicted in a regular colonoscopy frame. We then perform a sketchy 3D reconstruction of the corresponding 2D data, and a suggested navigation trajectory is derived on the basis of the obtained relief. Both the visible and the invisible lumen cases are considered. Owing to its low computational cost, this strategy allows for intraoperative configuration changes and thus reduces the effect of the colon's non-rigidity. Moreover, it tends to provide a safe navigation trajectory through the whole colon, since the approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. To make the considered process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.
Wei, Wenhui; Gao, Zhaohui; Gao, Shesheng; Jia, Ke
2018-04-09
To meet the autonomy and reliability requirements of navigation systems, and drawing on a method for measuring velocity from the spectral redshift of natural celestial bodies, a new scheme consisting of a Strapdown Inertial Navigation System (SINS), Spectral Redshift (SRS) navigation and a Geomagnetic Navigation System (GNS) is designed for autonomous integrated navigation. The principle of this SINS/SRS/GNS autonomous integrated navigation system is explored, and the corresponding mathematical model is established. Furthermore, a robust adaptive central difference particle filtering algorithm is proposed for this autonomous integrated navigation system. Simulation experiments show that the designed SINS/SRS/GNS autonomous integrated navigation system possesses good autonomy, strong robustness and high reliability, thus providing a new solution for autonomous navigation technology.
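The paper's robust adaptive central difference particle filter is not reproduced here, but the generic particle-filter skeleton it builds on (predict, weight by measurement likelihood, resample) can be sketched with a toy one-dimensional state and invented noise levels:

    # Generic particle-filter skeleton (predict / weight / resample) of the kind the
    # proposed filter elaborates; state, models, and noise levels are toy stand-ins.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    particles = rng.normal(0.0, 1.0, size=n)     # 1-D state, e.g. a velocity error

    def step(particles, z, q=0.1, r=0.5):
        particles = particles + rng.normal(0.0, q, size=n)      # predict (process noise q)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)           # weight by likelihood of z
        w /= w.sum()
        return rng.choice(particles, size=n, p=w)               # multinomial resampling

    for z in [0.2, 0.4, 0.5, 0.45]:              # e.g. redshift-derived measurements
        particles = step(particles, z)
    print("state estimate:", round(particles.mean(), 3))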
Comparison of helmet-mounted display designs in support of wayfinding
NASA Astrophysics Data System (ADS)
Kumagai, Jason K.; Massel, Lisa; Tack, David; Bossi, Linda
2003-09-01
The Canadian Soldier Information Requirements Technology Demonstration (SIREQ TD) soldier modernization research and development program has conducted experiments to help determine the types and amount of information needed to support wayfinding across a range of terrain environments, the most effective display modality for providing the information (visual, auditory or tactile) that will minimize conflict with other infantry tasks, and to optimize interface design. In this study, seven different visual helmet-mounted display (HMD) designs were developed based on soldier feedback from previous studies. The displays and an in-service compass condition were contrasted to investigate how the visual HMD interfaces influenced navigation performance. Displays varied with respect to their information content, frame of reference, point of view, and display features. Twelve male infantry soldiers used all eight experimental conditions to locate bearings to waypoints. From a constant location, participants were required to face waypoints presented at offset bearings of 25, 65, and 120 degrees. Performance measures included time to identify waypoints, accuracy, and head misdirection errors. Subjective measures of performance included ratings of ease of use, acceptance for land navigation, and mental demand. Comments were collected to identify likes, dislikes and possible improvements required for HMDs. Results underlined the potential performance enhancement of GPS-based navigation with HMDs, the requirement for explicit directional information, the desirability of both analog and digital information, the performance benefits of an egocentric frame of reference, the merit of a forward field of view, and the desirability of a guide to help landmark. Implications for the information requirements and human factors design of HMDs for land-based navigational tasks are discussed.
Plumb, Andrew A; Phillips, Peter; Spence, Graeme; Mallett, Susan; Taylor, Stuart A; Halligan, Steve; Fanshawe, Thomas
2017-08-01
Purpose: To investigate the effect of increasing navigation speed on visual search and decision making during polyp identification at computed tomography (CT) colonography. Materials and Methods: Institutional review board permission was obtained to use deidentified CT colonography data for this prospective reader study. After informed consent was obtained from the readers, 12 CT colonography fly-through examinations that depicted eight polyps were presented at four different fixed navigation speeds, ranging from 1 cm/sec to 4.5 cm/sec, to 23 radiologists. Gaze position was tracked by using an infrared eye tracker, and readers indicated that they saw a polyp by clicking a mouse. Patterns of searching and decision making by speed were investigated graphically and by multilevel modeling. Results: Readers identified polyps correctly in 56 of 77 (72.7%) viewings at the slowest speed but in only 137 of 225 (60.9%) viewings at the fastest speed (P = .004). They also identified fewer false-positive features at faster speeds (42 of 115 [36.5%] viewings at the slowest speed vs 89 of 345 [25.8%] at the fastest; P = .02). Gaze location was increasingly concentrated in the central quarter of the screen area at faster speeds (mean proportion of gaze points at the slowest vs fastest speed, 86% vs 97%, respectively). Conclusion: Faster navigation speed at endoluminal CT colonography led to progressive restriction of visual search patterns. Greater speed also reduced both true-positive and false-positive colorectal polyp identification. © RSNA, 2017. Online supplemental material is available for this article.
NASA Technical Reports Server (NTRS)
Mcgee, L. A.; Smith, G. L.; Hegarty, D. M.; Merrick, R. B.; Carson, T. M.; Schmidt, S. F.
1970-01-01
A preliminary study has been made of the navigation performance which might be achieved for the high cross-range space shuttle orbiter during final approach and landing by using an optimally augmented inertial navigation system. Computed navigation accuracies are presented for an on-board inertial navigation system augmented (by means of an optimal filter algorithm) with data from two different ground navigation aids: a precision ranging system and a microwave scanning beam landing guidance system. These results show that augmentation with either type of ground navigation aid is capable of providing navigation performance at touchdown which should be adequate for the space shuttle. In addition, adequate navigation performance for space shuttle landing is obtainable from the precision ranging system even with a complete dropout of precision range measurements as much as 100 seconds before touchdown.
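The flavor of such optimal augmentation can be conveyed with a one-dimensional Kalman-style blend of inertial propagation and ranging corrections; the models and noise values below are invented for illustration and are far simpler than the study's filter. During a ranging dropout, the correction step is simply skipped and the predicted uncertainty grows.

    # One-dimensional sketch: inertially propagated position corrected by a ranging
    # aid. Models and noise variances are invented and far simpler than the study's.
    x, P = 0.0, 10.0        # position estimate and its variance
    Q, R = 0.04, 1.0        # INS drift (process) and ranging (measurement) variances

    def kalman_step(x, P, ins_delta, range_meas):
        x, P = x + ins_delta, P + Q          # predict with the inertial increment
        K = P / (P + R)                      # Kalman gain
        return x + K * (range_meas - x), (1.0 - K) * P   # correct with the range fix

    for ins_delta, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 3.1)]:
        x, P = kalman_step(x, P, ins_delta, z)
    print(x, P)   # during a ranging dropout, only the predict step would run and P grows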
2007-02-01
[Only front-matter fragments of this record were recovered: appendix and figure listings for NASA-TLX subscale score mean differences (ANOVA post hoc analyses, Experiments 1 and 2) and a figure of overall NASA-TLX score versus waypoint display modality.]
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
Overview of research in progress at the Center of Excellence
NASA Technical Reports Server (NTRS)
Wandell, Brian A.
1993-01-01
The Center of Excellence (COE) was created nine years ago to facilitate active collaboration between the scientists at Ames Research Center and the Stanford Psychology Department. Significant interchange of ideas and personnel continues between Stanford and participating groups at NASA-Ames; the COE serves its function well. This progress report is organized into sections divided by project. Each section contains a list of investigators, a background statement, progress report, and a proposal for work during the coming year. The projects are: Algorithms for development and calibration of visual systems, Visually optimized image compression, Evaluation of advanced piloting displays, Spectral representations of color, Perception of motion in man and machine, Automation and decision making, and Motion information used for navigation and control.
Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty
2009-08-03
replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a navigation
NASA Mars rover: a testbed for evaluating applications of covariance intersection
NASA Astrophysics Data System (ADS)
Uhlmann, Jeffrey K.; Julier, Simon J.; Kamgar-Parsi, Behzad; Lanzagorta, Marco O.; Shyu, Haw-Jye S.
1999-07-01
The Naval Research Laboratory (NRL) has spearheaded the development and application of Covariance Intersection (CI) for a variety of decentralized data fusion problems. Such problems include distributed control, onboard sensor fusion, and dynamic map building and localization. In this paper we describe NRL's development of a CI-based navigation system for the NASA Mars rover that stresses almost all aspects of decentralized data fusion. We also describe how this project relates to NRL's augmented reality, advanced visualization, and REBOT projects.
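Covariance Intersection itself fuses two estimates whose cross-correlation is unknown by convexly combining their inverse covariances; a minimal sketch, with the weight chosen by a simple trace-minimizing search, is:

    # Minimal Covariance Intersection: fuse estimates (a, A) and (b, B) whose
    # cross-correlation is unknown; omega minimizes the trace of the fused covariance.
    import numpy as np

    def covariance_intersection(a, A, b, B, steps=99):
        best = None
        for w in np.linspace(0.01, 0.99, steps):
            C = np.linalg.inv(w * np.linalg.inv(A) + (1.0 - w) * np.linalg.inv(B))
            if best is None or np.trace(C) < best[0]:
                c = C @ (w * np.linalg.inv(A) @ a + (1.0 - w) * np.linalg.inv(B) @ b)
                best = (np.trace(C), c, C)
        return best[1], best[2]

    a, A = np.array([1.0, 2.0]), np.diag([1.0, 4.0])   # one platform's estimate
    b, B = np.array([1.5, 1.0]), np.diag([2.0, 1.0])   # another platform's estimate
    x, P = covariance_intersection(a, A, b, B)
    print(x, np.diag(P))

Unlike a naive Kalman fusion, the result remains consistent even when the two estimates share unmodeled common information, which is what makes CI attractive for decentralized data fusion.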
Design and implementation of a PC-based image-guided surgical system.
Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L
2002-11-01
In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.
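Point-based rigid registration from physical (tracker) space to image space is commonly solved in closed form with an SVD (Arun's method). The sketch below shows that standard construction; it is a generic approach, not necessarily the algorithm ORION's libraries implement.

    # Closed-form rigid registration (SVD, Arun's method) of physical-space fiducials
    # to image-space fiducials; a standard construction, not necessarily ORION's code.
    import numpy as np

    def rigid_register(P, Q):
        """Return R, t minimizing sum ||R @ p_i + t - q_i||^2 over 3-D point pairs."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                      # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    rng = np.random.default_rng(4)
    P = rng.random((6, 3))                             # fiducials in tracker space
    th = np.radians(30.0)
    true_R = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0,         0.0,        1.0]])
    Q = P @ true_R.T + np.array([1.0, 2.0, 3.0])       # same fiducials in image space
    R, t = rigid_register(P, Q)
    print(np.allclose(R, true_R), np.round(t, 6))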
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.
Leonhardt, Sara D; Kaluza, Benjamin F; Wallace, Helen; Heard, Tim A
2016-10-01
To date, no study has investigated how landscape structural (visual) alterations affect navigation and thus homing success in stingless bees. We addressed this question in the Australian stingless bee Tetragonula carbonaria by performing marking, release and re-capture experiments in landscapes differing in habitat homogeneity (i.e., the proportion of elongated ground features typically considered prominent visual landmarks). We investigated how landscape affected the proportion of bees and nectar foragers returning to their hives as well as the earliest time bees and foragers returned. Undisturbed landscapes with few landmarks (that are conspicuous to the human eye) and large proportions of vegetation cover (natural forests) were classified visually/structurally homogeneous, and disturbed landscapes with many landmarks and fragmented or no extensive vegetation cover (gardens and plantations) visually/structurally heterogeneous. We found that proportions of successfully returning nectar foragers and earliest times first bees and foragers returned did not differ between landscapes. However, most bees returned in the visually/structurally most (forest) and least (garden) homogeneous landscape, suggesting that they use other than elongated ground features for navigation and that return speed is primarily driven by resource availability in a landscape.
Subtle changes in the landmark panorama disrupt visual navigation in a nocturnal bull ant
2017-01-01
The ability of ants to navigate when the visual landmark information is altered has often been tested by creating large and artificial discrepancies in their visual environment. Here, we had an opportunity to slightly modify the natural visual environment around the nest of the nocturnal bull ant Myrmecia pyriformis. We achieved this by felling three dead trees, two located along the typical route followed by the foragers of that particular nest and one in a direction perpendicular to their foraging direction. An image difference analysis showed that the change in the overall panorama following the removal of these trees was relatively little. We filmed the behaviour of ants close to the nest and tracked their entire paths, both before and after the trees were removed. We found that immediately after the trees were removed, ants walked slower and were less directed. Their foraging success decreased and they looked around more, including turning back to look towards the nest. We document how their behaviour changed over subsequent nights and discuss how the ants may detect and respond to a modified visual environment in the evening twilight period. This article is part of the themed issue ‘Vision in dim light’. PMID:28193813
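Image difference analyses of this kind typically compare panoramic images pixel-wise, often across all azimuthal rotations. A generic sketch of such a rotational image difference function, not the authors' exact pipeline, follows:

    # Generic rotational image-difference function over grayscale panoramas, of the
    # kind used to quantify panorama change; not the authors' exact analysis.
    import numpy as np

    def rot_idf(pan_a, pan_b):
        """RMS pixel difference between two panoramas at every azimuthal shift."""
        w = pan_a.shape[1]
        return np.array([np.sqrt(np.mean((np.roll(pan_b, s, axis=1) - pan_a) ** 2))
                         for s in range(w)])

    rng = np.random.default_rng(5)
    before = rng.random((60, 360))          # rows: elevation, columns: azimuth (degrees)
    after = before.copy()
    after[:20, 100:110] = 0.0               # a small skyline change, e.g. removed trees
    print("difference at zero rotation:", round(rot_idf(before, after)[0], 4))

A small minimum over all shifts indicates that the panorama, as a whole, changed little, which is consistent with the paper's finding that removing the trees altered the overall panorama only slightly.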
Visual homing with a pan-tilt based stereo camera
NASA Astrophysics Data System (ADS)
Nirmal, Paramesh; Lyons, Damian M.
2013-01-01
Visual homing is a navigation method based on comparing a stored image of the goal location with the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the SIFT keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
Local Homing Navigation Based on the Moment Model for Landmark Distribution and Features
Lee, Changmin; Kim, DaeEun
2017-01-01
For local homing navigation, an agent is supposed to return home based on the surrounding environmental information. According to the snapshot model, the home snapshot and the current view are compared to determine the homing direction. In this paper, we propose a novel homing navigation method using the moment model. The suggested moment model also follows the snapshot theory of comparing the home snapshot and the current view, but it defines a moment of landmark inertia as the sum, over landmark particles, of the product of each particle's feature value and the square of its distance. The method thus uses the range values of landmarks in the surrounding view together with the visual features. The center of the moment can be estimated as the reference point, which is the unique convergence point of the moment potential from any view. The homing vector can easily be extracted from the centers of the moment measured at the current position and the home location. The method effectively guides the homing direction in real environments as well as in simulation. In this paper, we take a holistic approach, using all pixels in the panoramic image as landmarks and the RGB color intensity as the visual feature, and a set of three moment functions is encoded to determine the homing vector. We also tested the moment model with only visual features, but the suggested moment model with both the visual feature and the landmark distance shows superior performance. We demonstrate homing performance with various methods classified by the status of the feature, the distance and the coordinate alignment. PMID:29149043
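Under the definition quoted above, the center of the moment is simply the feature-weighted centroid: it minimizes the potential given by the sum of f_i times the squared distance to landmark i. A toy sketch of that center and the resulting homing vector, with invented landmark data and an idealized aligned-coordinate setting:

    # Toy sketch of the quoted moment definition: the feature-weighted center that
    # minimizes sum_i f_i * |x - p_i|^2. Landmark data and the shift are invented.
    import numpy as np

    def moment_center(positions, features):
        w = features / features.sum()
        return (w[:, None] * positions).sum(axis=0)

    positions = np.array([[2.0, 1.0], [-1.0, 3.0], [0.5, -2.0]])  # egocentric landmarks
    features = np.array([0.8, 0.3, 0.6])                          # e.g. color intensity

    home_center = moment_center(positions, features)              # view from home
    displacement = np.array([1.0, 0.5])                           # agent moved by this
    current_center = moment_center(positions - displacement, features)
    homing_vector = current_center - home_center                  # equals -displacement
    print(homing_vector)                                          # points back to home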
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-04-23
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke impairment, has been shown to affect the recovery of locomotor and navigation skills needed for community mobility. We recently found that USN alters goal-directed locomotion in conditions of different cognitive/perceptual demands. However, sensorimotor post-stroke dysfunction (e.g. decreased walking speed) could have influenced the results. Analogous to a previously used goal-directed locomotor paradigm, a seated, joystick-driven navigation experiment, minimizing locomotor demands, was employed in individuals with and without post-stroke USN (USN+ and USN-, respectively) and healthy controls (HC). Participants (n = 15 per group) performed a seated, joystick-driven navigation and detection time task to targets 7 m away at 0° and ±15°/±30° in actual (visually-guided), remembered (memory-guided) and shifting (visually-guided with representational updating component) conditions while immersed in a 3D virtual reality environment. Greater end-point mediolateral errors to left-sided targets (remembered and shifting conditions) and overall lengthier onsets in reorientation strategy (shifting condition) were found for USN+ vs. USN- and vs. HC (p < 0.05). USN+ individuals mostly overshot left targets (-15°/-30°). Greater delays in detection time for target locations across the visual spectrum (left, middle and right) were found in USN+ vs. USN- and HC groups (p < 0.05). USN-related attentional-perceptual deficits alter navigation abilities in memory-guided and shifting conditions, independently of post-stroke locomotor deficits. Lateralized and non-lateralized deficits in object detection are found. The employed paradigm could be considered in the design and development of sensitive and functional assessment methods for neglect, thereby addressing the drawbacks of currently used traditional paper-and-pencil tools.
Visual navigation in insects: coupling of egocentric and geocentric information
Wehner; Michel; Antonsen
1996-01-01
Social hymenopterans such as bees and ants are central-place foragers; they regularly depart from and return to fixed positions in their environment. In returning to the starting point of their foraging excursion or to any other point, they could resort to two fundamentally different ways of navigation by using either egocentric or geocentric systems of reference. In the first case, they would rely on information continuously collected en route (path integration, dead reckoning), i.e. integrate all angles steered and all distances covered into a mean home vector. In the second case, they are expected, at least by some authors, to use a map-based system of navigation, i.e. to obtain positional information by virtue of the spatial position they occupy within a larger environmental framework. In bees and ants, path integration employing a skylight compass is the predominant mechanism of navigation, but geocentred landmark-based information is used as well. This information is obtained while the animal is dead-reckoning and, hence, added to the vector course. For example, the image of the horizon skyline surrounding the nest entrance is retinotopically stored while the animal approaches the goal along its vector course. As shown in desert ants (genus Cataglyphis), there is neither interocular nor intraocular transfer of landmark information. Furthermore, this retinotopically fixed, and hence egocentred, neural snapshot is linked to an external (geocentred) system of reference. In this way, geocentred information might more and more complement and potentially even supersede the egocentred information provided by the path-integration system. In competition experiments, however, Cataglyphis never frees itself of its homeward-bound vector - its safety-line, so to speak - by which it is always linked to home. Vector information can also be transferred to a longer-lasting (higher-order) memory. There is no need to invoke the concept of the mental analogue of a topographic map - a metric map - assembled by the insect navigator. The flexible use of vectors, snapshots and landmark-based routes suffices to interpret the insect's behaviour. The cognitive-map approach in particular, and the representational paradigm in general, are discussed.
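The egocentric (path integration) component lends itself to a compact illustration: each leg's compass heading and odometric distance are summed into a position vector whose negation is the home vector. A minimal sketch, with an invented route:

    # Minimal path integration (dead reckoning): accumulate each leg's displacement
    # from compass heading and distance; the home vector is the running sum negated.
    import numpy as np

    legs = [(90.0, 5.0), (0.0, 3.0), (210.0, 4.0)]   # (heading deg, distance) per leg

    position = np.zeros(2)
    for heading, dist in legs:
        th = np.radians(heading)
        position += dist * np.array([np.cos(th), np.sin(th)])

    home_vector = -position
    print("home bearing (deg):", round(np.degrees(np.arctan2(home_vector[1],
                                                             home_vector[0])), 1))
    print("home distance:", round(np.linalg.norm(home_vector), 2))

In the insect, the compass input comes from skylight polarization rather than a magnetometer, and landmark snapshots are layered on top of this running vector rather than replacing it, as the abstract describes.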
Li, Liang; Yang, Jian; Chu, Yakui; Wu, Wenbo; Xue, Jin; Liang, Ping; Chen, Lei
2016-01-01
Objective: To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. Materials and Methods: In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigation system. In cadaver head-based simulated operations, we compared the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system with those of conventional navigation systems. Results: The navigation system developed in this study has a novel display mode capable of fusing endoscopic images with three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which meets the accepted standards for a navigation system used in nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. Conclusion: The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigation system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigation system has the potential to increase the quality of surgeries. In addition, the augmented reality navigation system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon's skills and knowledge, not as a substitute. PMID:26757365