How does visual manipulation affect obstacle avoidance strategies used by athletes?
Bijman, M P; Fisher, J J; Vallis, L A
2016-01-01
Research examining our ability to avoid obstacles in our path has stressed the importance of visual input. The aim of this study was to determine whether athletes playing varsity-level field sports, who rely on visual input to guide motor behaviour, are better able to guide their feet over obstacles than recreationally active individuals. While wearing kinematic markers, eight varsity athletes and eight age-matched controls (aged 18-25) walked along a walkway and stepped over stationary obstacles (180° motion arc). Visual input was manipulated using PLATO visual goggles three or two steps before obstacle crossing and compared with trials in which vision was available throughout. A between-group main effect for peak trail toe elevation was found, with greater values generated by the controls for all crossing conditions during full-vision trials only. This may be interpreted as the athletes not perceiving the obstacle as an increased threat to their postural stability. Collectively, findings suggest the athletic group is able to transfer their abilities to non-specific conditions during full-vision trials; however, varsity-level athletes were equally reliant on visual cues for these visually guided stepping tasks, as their performance was similar to that of the controls when vision was removed.
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems of visually impaired people and achieving obstacle avoidance, enabling visually impaired people to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the "you only look once" (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for assisting visually impaired people in recognizing the types and locations of obstacles around them. PMID:28608811
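The online mode described above amounts to a simple client-server round trip: the phone captures a frame, posts it to the detection backend, and speaks the returned labels. The following is a minimal sketch of that flow in Python; the endpoint URL, the JSON response layout, and the frame width used for the left/right decision are illustrative assumptions, not the authors' actual API.

```python
# Minimal sketch of the "online mode" round trip described above.
# The server URL and JSON response layout are assumed for illustration only.
import requests

def detect_obstacles(image_path, server_url="http://example-backend/detect"):
    """Send one captured frame to the backend detector and return its results."""
    with open(image_path, "rb") as f:
        response = requests.post(server_url, files={"image": f}, timeout=5.0)
    response.raise_for_status()
    # Assumed schema: a list of {"label": str, "confidence": float, "box": [x, y, w, h]}
    return response.json()["detections"]

def describe(detections, min_confidence=0.6):
    """Turn detections into short phrases a screen reader could speak."""
    for det in detections:
        if det["confidence"] >= min_confidence:
            x, _, w, _ = det["box"]
            side = "left" if x + w / 2 < 320 else "right"  # assumes 640-px-wide frames
            print(f'{det["label"]} ahead on the {side}')

if __name__ == "__main__":
    describe(detect_obstacles("frame.jpg"))
```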
An assessment of auditory-guided locomotion in an obstacle circumvention task.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2016-06-01
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
Hatzitaki, V; Voudouris, D; Nikodelis, T; Amiridis, I G
2009-02-01
The study examined the impact of visually guided weight shifting (WS) practice on the postural adjustments evoked by elderly women when avoiding collision with a moving obstacle while standing. Fifty-six healthy elderly women (70.9 ± 5.7 years, 87.5 ± 9.6 kg) were randomly assigned to one of three groups: a group that completed 12 sessions (25 min, 3 sessions/week) of WS practice in the anterior/posterior direction (A/P group, n=20), a group that performed the same practice in the medio/lateral direction (M/L group, n=20), and a control group (n=16). Pre- and post-training, participants were tested in a moving obstacle avoidance task. As a result of practice, postural response onset shifted closer to the time of collision with the obstacle. Side-to-side WS resulted in a reduction of the M/L sway amplitude and an increase of the trunk's velocity during avoidance. It is concluded that visually guided WS practice enhances elderly women's ability for on-line visuo-motor processing when avoiding collision, eliminating reliance on anticipatory scaling. Specifying the direction of WS seems to be critical for optimizing the transfer of training adaptations.
Horowitz, Seth S; Cheney, Cheryl A; Simmons, James A
2004-01-01
The big brown bat (Eptesicus fuscus) is an aerial-feeding insectivorous species that relies on echolocation to avoid obstacles and to detect flying insects. Spatial perception in the dark using echolocation challenges the vestibular system to function without substantial visual input for orientation. IR thermal video recordings show the complexity of bat flights in the field and suggest a highly dynamic role for the vestibular system in orientation and flight control. To examine this role, we carried out laboratory studies of flight behavior under illuminated and dark conditions in both static and rotating obstacle tests while administering heavy water (D2O) to impair vestibular inputs. Eptesicus carried out complex maneuvers through both fixed arrays of wires and a rotating obstacle array using both vision and echolocation, or when guided by echolocation alone. When treated with D2O in combination with lack of visual cues, bats showed considerable decrements in performance. These data indicate that big brown bats use both vision and echolocation to provide spatial registration for head position information generated by the vestibular system.
Patla, Aftab E; Greig, Michael
In the two experiments discussed in this paper we quantified obstacle avoidance performance characteristics carried out open loop (without vision) but with different initial visual sampling conditions and compared them to the full vision condition. The initial visual sampling conditions included: static vision (SV), vision during forward walking for three steps and stopping (FW), vision during forward walking for three steps and not stopping (FW-NS), and vision during backward walking for three steps and stopping (BW). In experiment 1, we compared performance during SV, FW and BW with the full vision condition, while in the second experiment we compared performance during FW and FW-NS conditions. The questions we wanted to address are: Is ecologically valid dynamic visual sampling of the environment superior to static visual sampling for an open loop obstacle avoidance task? What are the reasons for failure in performing an open loop obstacle avoidance task? The results showed that irrespective of the initial visual sampling condition, when open loop control is initiated from a standing posture the success rate was only approximately 50%. The main reason for the high failure rates was not inappropriate limb elevation, but incorrect foot placement before the obstacle. The second experiment showed that it is not the nature of visual sampling per se that influences success rate, but the fact that the open loop obstacle avoidance task is initiated from a standing posture. The results of these two experiments clearly demonstrate the importance of on-line visual information for adaptive human locomotion.
ERIC Educational Resources Information Center
Jax, Steven A.; Rosenbaum, David A.
2007-01-01
According to a prominent theory of human perception and performance (M. A. Goodale & A. D. Milner, 1992), the dorsal, action-related stream only controls visually guided actions in real time. Such a system would be predicted to show little or no action priming from previous experience. The 3 experiments reported here were designed to determine…
Prism adaptation and generalization during visually guided locomotor tasks.
Alexander, M Scott; Flodin, Brent W G; Marigold, Daniel S
2011-08-01
The ability of individuals to adapt locomotion to constraints associated with the complex environments normally encountered in everyday life is paramount for survival. Here, we tested the ability of 24 healthy young adults to adapt to a rightward prism shift (∼11.3°) while either walking and stepping to targets (i.e., precision stepping task) or stepping over an obstacle (i.e., obstacle avoidance task). We subsequently tested for generalization to the other locomotor task. In the precision stepping task, we determined the lateral end-point error of foot placement from the targets. In the obstacle avoidance task, we determined toe clearance and lateral foot placement distance from the obstacle before and after stepping over the obstacle. We found large, rightward deviations in foot placement on initial exposure to prisms in both tasks. The majority of measures demonstrated adaptation over repeated trials, and adaptation rates were dependent mainly on the task. On removal of the prisms, we observed negative aftereffects for measures of both tasks. Additionally, we found a unilateral symmetric generalization pattern in that the left, but not the right, lower limb indicated generalization across the 2 locomotor tasks. These results indicate that the nervous system is capable of rapidly adapting to a visuomotor mismatch during visually demanding locomotor tasks and that the prism-induced adaptation can, at least partially, generalize across these tasks. The results also support the notion that the nervous system utilizes an internal model for the control of visually guided locomotion.
Development of the navigation system for visually impaired.
Harada, Tetsuya; Kaneko, Yuki; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2004-01-01
A white cane is a typical support instrument for the visually impaired. They use a white cane to detect obstacles while walking. In areas for which they have a mental map, they can therefore walk with a white cane without the help of others. However, they cannot walk independently in an unknown area, even with a white cane, because a white cane is a device for detecting obstacles, not a navigation device indicating the correct route. We are developing an indoor navigation system for the visually impaired. In Japan, colored guide lines to a destination are sometimes provided for sighted people; these lines are attached to the floor, and the destination can be reached by walking along one of them. In our system, a newly developed white cane senses a colored guide line and notifies the user by vibration. The system recognizes the colored line stuck on the floor with an optical sensor attached to the white cane. To guide the user still more smoothly, infrared beacons (optical beacons), which can provide voice guidance, are also used.
Cheng, Po-Hsun
2016-01-01
Several assistive technologies are available to help visually impaired individuals avoid obstructions while walking. Unfortunately, white canes and medical walkers are unable to detect obstacles on the road or react to encumbrances located above the waist. In this study, I adopted the cyber-physical system approach in the development of a cap-connected device to compensate for gaps in detection associated with conventional aids for the visually impaired. I developed a verisimilar experimental route, involving the participation of seven individuals with visual impairment, that included straight sections, left turns, right turns, curves, and suspended objects. My aim was to facilitate the collection of information required for the practical use of the device. My findings, although based on a small number of subjects, demonstrate the feasibility of the proposed guiding device in alerting walkers to the presence of some kinds of obstacles; that is, the device shows promise for future work and research. My findings provide a valuable reference for the further improvement of these devices as well as the establishment of experiments involving the visually impaired.
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
ERIC Educational Resources Information Center
Elder, David M.; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3-dimensional virtual reality environment to determine the position of objects on the basis of motion discontinuities and computes heading direction,…
Through the eyes of a bird: modelling visually guided obstacle flight
Lin, Huai-Ti; Ros, Ivo G.; Biewener, Andrew A.
2014-01-01
Various flight navigation strategies for birds have been identified at the large spatial scales of migratory and homing behaviours. However, relatively little is known about close-range obstacle negotiation through cluttered environments. To examine obstacle flight guidance, we tracked pigeons (Columba livia) flying through an artificial forest of vertical poles. Interestingly, pigeons adjusted their flight path only approximately 1.5 m from the forest entry, suggesting a reactive mode of path planning. Combining flight trajectories with obstacle pole positions, we reconstructed the visual experience of the pigeons throughout obstacle flights. Assuming proportional–derivative control with a constant delay, we searched the relevant parameter space of steering gains and visuomotor delays that best explained the observed steering. We found that a pigeon's steering resembles proportional control driven by the error angle between the flight direction and the desired opening, or gap, between obstacles. Using this pigeon steering controller, we simulated obstacle flights and showed that pigeons do not simply steer to the nearest opening in the direction of flight or destination. Pigeons bias their flight direction towards larger visual gaps when making fast steering decisions. The proposed behavioural modelling method converts the obstacle avoidance behaviour into a (piecewise) target-aiming behaviour, which is better defined and understood. This study demonstrates how such an approach decomposes open-loop free-flight behaviours into components that can be independently evaluated. PMID:24812052
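The fitted controller described above, a steering rate proportional to the error angle between the current flight direction and the chosen gap, acting after a fixed visuomotor delay, can be illustrated with a small discrete-time simulation. This is a minimal sketch: the gain, delay, and time step below are placeholder values, not the parameters estimated in the paper.

```python
# Toy proportional steering toward a gap direction with a fixed visuomotor delay.
# Gain, delay, and time step are illustrative placeholders, not the fitted values.
import math
from collections import deque

DT = 0.01          # simulation step (s)
DELAY_S = 0.08     # assumed visuomotor delay (s)
GAIN = 4.0         # assumed proportional gain (1/s)

def simulate(gap_bearing_fn, duration=2.0, heading0=0.0):
    """Integrate heading under delayed proportional control toward the gap bearing."""
    delay_steps = int(DELAY_S / DT)
    heading = heading0
    error_buffer = deque([0.0] * delay_steps, maxlen=delay_steps)  # crude delay line
    trajectory = []
    for k in range(int(duration / DT)):
        t = k * DT
        error_now = gap_bearing_fn(t) - heading     # angle to the desired opening
        error_buffer.append(error_now)
        delayed_error = error_buffer[0]             # error as "seen" one delay ago
        heading += GAIN * delayed_error * DT        # proportional steering update
        trajectory.append((t, heading))
    return trajectory

if __name__ == "__main__":
    # Example: a gap fixed 20 degrees to the right of the initial heading.
    traj = simulate(lambda t: math.radians(20.0))
    print(f"final heading: {math.degrees(traj[-1][1]):.1f} deg")
```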
Design and Development of a Mobile Sensor-Based Blind Assistance Wayfinding System
NASA Astrophysics Data System (ADS)
Barati, F.; Delavar, M. R.
2015-12-01
Blind and visually impaired people face a number of challenges in their daily lives. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most blind people undertake route finding and navigation with the help of a guide. In addition, other tools such as a cane, guide dog, or electronic aids are used. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective decision-support methods based on non-visual media are needed to improve the quality of life of the blind through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are obstacle recognition for guiding the blind and the design and implementation of a mobile sensor-based wayfinding and navigation system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation in the wayfinding. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals, together with the speed of sound in the environment, to estimate the distance to obstacles. The coordinates and characteristics of all the obstacles in the study area are already stored in a GIS database, and all of these obstacles were labeled on the map. The ultrasonic sensor designed and constructed in this study is able to detect obstacles at distances of 2 cm to 400 cm. The implementation, and the results obtained from interviews with a number of blind persons who used the sensor, verified that the designed mobile sensor system for wayfinding was very satisfactory.
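The echo-timing principle used by the ultrasonic sensor converts the round-trip time of a pulse into a distance estimate, d = c·t/2. A minimal sketch follows, assuming a room-temperature speed of sound of about 343 m/s (the actual sensor calibration is not given in the abstract).

```python
# Distance from ultrasonic echo round-trip time: d = c * t / 2.
# 343 m/s is an assumed room-temperature speed of sound.
SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance_cm(round_trip_s: float) -> float:
    """Convert the send-to-echo interval (seconds) into obstacle distance (cm)."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0 * 100.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(f"{echo_time_to_distance_cm(0.0058):.0f} cm")
```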
Development of the navigation system for the visually impaired by using white cane.
Hirahara, Yoshiaki; Sakurai, Yusuke; Shiidu, Yuriko; Yanashima, Kenji; Magatani, Kazushige
2006-01-01
A white cane is a typical support instrument for the visually impaired. They use a white cane to detect obstacles while walking. In areas for which they have a mental map, they can therefore walk with a white cane without the help of others. However, they cannot walk independently in an unknown area, even with a white cane, because a white cane is a device for detecting obstacles, not a navigation device indicating the correct route. We are developing an indoor navigation system for the visually impaired. In Japan, colored guide lines to a destination are sometimes provided for sighted people; these lines are attached to the floor, and the destination can be reached by walking along one of them. In our system, a newly developed white cane senses a colored guide line and notifies the user by vibration. The system recognizes the color of the line stuck on the floor with an optical sensor attached to the white cane. To guide the user still more smoothly, infrared beacons (optical beacons), which can provide voice guidance, are also used.
Visual control of foot placement when walking over complex terrain.
Matthis, Jonathan S; Fajen, Brett R
2014-02-01
The aim of this study was to investigate the role of visual information in the control of walking over complex terrain with irregularly spaced obstacles. We developed an experimental paradigm to measure how far along the future path people need to see in order to maintain forward progress and avoid stepping on obstacles. Participants walked over an array of randomly distributed virtual obstacles that were projected onto the floor by an LCD projector while their movements were tracked by a full-body motion capture system. Walking behavior in a full-vision control condition was compared with behavior in a number of other visibility conditions in which obstacles did not appear until they fell within a window of visibility centered on the moving observer. Collisions with obstacles were more frequent and, for some participants, walking speed was slower when the visibility window constrained vision to less than two step lengths ahead. When window sizes were greater than two step lengths, the frequency of collisions and walking speed were weakly affected or unaffected. We conclude that visual information from at least two step lengths ahead is needed to guide foot placement when walking over complex terrain. When placed in the context of recent research on the biomechanics of walking, the findings suggest that two step lengths of visual information may be needed because it allows walkers to exploit the passive mechanical forces inherent to bipedal locomotion, thereby avoiding obstacles while maximizing energetic efficiency.
Laterality and performance of agility-trained dogs.
Siniscalchi, Marcello; Bertino, Daniele; Quaranta, Angelo
2014-01-01
Correlations between lateralised behaviour and performance were investigated in 19 agility-trained dogs (Canis familiaris) by scoring paw preference to hold a food object and relating it to performance during typical agility obstacles (jump/A-frame and weave poles). In addition, because recent behavioural studies reported that visual stimuli of emotional valence presented to one visual hemifield at a time affect visually guided motor responses in dogs, the possibility that the position of the owner respectively in the left and in the right canine visual hemifield might be associated with quality of performance during agility was considered. Dogs' temperament was also measured by an owner-rated questionnaire. The most relevant finding was that agility-trained dogs displayed longer latencies to complete the obstacles with the owner located in their left visual hemifield compared to the right. Interestingly, the results showed that this phenomenon was significantly linked to both dogs' trainability and the strength of paw preference.
Houdijk, Han; van Ooijen, Mariëlle W; Kraal, Jos J; Wiggerts, Henri O; Polomski, Wojtek; Janssen, Thomas W J; Roerdink, Melvyn
2012-11-01
Gait adaptability, including the ability to avoid obstacles and to take visually guided steps, is essential for safe movement through a cluttered world. This aspect of walking ability is important for regaining independent mobility but is difficult to assess in clinical practice. The objective of this study was to investigate the validity of an instrumented treadmill with obstacles and stepping targets projected on the belt's surface for assessing prosthetic gait adaptability. This was an observational study. A control group of people who were able bodied (n=12) and groups of people with transtibial (n=12) and transfemoral (n=12) amputations participated. Participants walked at a self-selected speed on an instrumented treadmill with projected visual obstacles and stepping targets. Gait adaptability was evaluated in terms of anticipatory and reactive obstacle avoidance performance (for obstacles presented 4 steps and 1 step ahead, respectively) and accuracy of stepping on regular and irregular patterns of stepping targets. In addition, several clinical tests were administered, including timed walking tests and reports of incidence of falls and fear of falling. Obstacle avoidance performance and stepping accuracy were significantly lower in the groups with amputations than in the control group. Anticipatory obstacle avoidance performance was moderately correlated with timed walking test scores. Reactive obstacle avoidance performance and stepping accuracy performance were not related to timed walking tests. Gait adaptability scores did not differ in groups stratified by incidence of falls or fear of falling. Because gait adaptability was affected by walking speed, differences in self-selected walking speed may have diminished differences in gait adaptability between groups. Gait adaptability can be validly assessed by use of an instrumented treadmill with a projected visual context. When walking speed is taken into account, this assessment provides unique, quantitative information about walking ability in people with a lower-limb amputation.
Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF
NASA Astrophysics Data System (ADS)
Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)
A conceptual Guide-Dog Robot prototype to lead and to recognize a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of a corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.
Lukic, Luka; Santos-Victor, José; Billard, Aude
2014-04-01
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm, and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking similar control systems found in humans. We validate our model for visuomotor control of a humanoid robot.
Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M
2018-05-09
Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.
Mapping multisensory parietal face and body areas in humans.
Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I
2012-10-30
Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.
Research on robot mobile obstacle avoidance control based on visual information
NASA Astrophysics Data System (ADS)
Jin, Jiang
2018-03-01
Detecting obstacles and controlling a robot to avoid them has been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed: by interpreting the visual information, it is transformed into the information source for path processing. When obstacles are encountered along the established route, the algorithm adjusts the trajectory in real time to achieve intelligent control of the mobile robot. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's motion control are guaranteed.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
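The entropy cue described above, where low image entropy suggests a single dominant object (a candidate landmark) and high entropy suggests a cluttered scene, can be sketched as the Shannon entropy of a grayscale intensity histogram. This is a minimal illustration; the decision threshold is an assumed placeholder, not a value from the paper.

```python
# Shannon entropy of a grayscale image as a rough "single object vs. clutter" cue.
# The decision threshold is an illustrative placeholder, not the paper's value.
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Entropy (bits) of the intensity histogram of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def classify_view(gray: np.ndarray, threshold_bits: float = 5.0) -> str:
    """Low entropy -> likely a single candidate landmark; high -> cluttered scene."""
    return "landmark-candidate" if image_entropy(gray) < threshold_bits else "clutter/obstacles"

if __name__ == "__main__":
    flat_patch = np.full((120, 160), 128, dtype=np.uint8)                 # one uniform object
    noisy_patch = np.random.randint(0, 256, (120, 160), dtype=np.uint8)   # cluttered scene
    print(classify_view(flat_patch), classify_view(noisy_patch))
```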
Reed-Jones, Rebecca J; Dorgo, Sandor; Hitchings, Maija K; Bader, Julia O
2012-04-01
This study aimed to examine the effect of visual training on obstacle course performance of independent community dwelling older adults. Agility is the ability to rapidly alter ongoing motor patterns, an important aspect of mobility which is required in obstacle avoidance. However, visual information is also a critical factor in successful obstacle avoidance. We compared obstacle course performance of a group that trained in visually driven body movements and agility drills, to a group that trained only in agility drills. We also included a control group that followed the American College of Sports Medicine exercise recommendations for older adults. Significant gains in fitness, mobility and power were observed across all training groups. Obstacle course performance results revealed that visual training had the greatest improvement on obstacle course performance (22%) following a 12 week training program. These results suggest that visual training may be an important consideration for fall prevention programs.
The use of visual cues for vehicle control and navigation
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Battiste, Vernol
1991-01-01
At least three levels of control are required to operate most vehicles: (1) inner-loop control to counteract the momentary effects of disturbances on vehicle position; (2) intermittent maneuvers to avoid obstacles, and (3) outer-loop control to maintain a planned route. Operators monitor dynamic optical relationships in their immediate surroundings to estimate momentary changes in forward, lateral, and vertical position, rates of change in speed and direction of motion, and distance from obstacles. The process of searching the external scene to find landmarks (for navigation) is intermittent and deliberate, while monitoring and responding to subtle changes in the visual scene (for vehicle control) is relatively continuous and 'automatic'. However, since operators may perform both tasks simultaneously, the dynamic optical cues available for a vehicle control task may be determined by the operator's direction of gaze for wayfinding. An attempt to relate the visual processes involved in vehicle control and wayfinding is presented. The frames of reference and information used by different operators (e.g., automobile drivers, airline pilots, and helicopter pilots) are reviewed with particular emphasis on the special problems encountered by helicopter pilots flying nap of the earth (NOE). The goal of this overview is to describe the context within which different vehicle control tasks are performed and to suggest ways in which the use of visual cues for geographical orientation might influence visually guided control activities.
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
Assisting the visually impaired: obstacle detection and warning system by acoustic feedback.
Rodríguez, Alberto; Yebes, J Javier; Alcantarilla, Pablo F; Bergasa, Luis M; Almazán, Javier; Cela, Andrés
2012-12-17
This article focuses on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone-conducting technology is employed to play these sounds without preventing the visually impaired user from hearing other important sounds from the local environment. A user study with four visually impaired volunteers supports the proposed system.
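The RANSAC ground-plane step described above fits a plane to the 3D points recovered from the disparity map and treats off-plane points as potential obstacles. A rough sketch of such a fit follows; the iteration count and inlier tolerance are assumed values, not those used by the authors.

```python
# Minimal RANSAC plane fit, as used for ground-plane estimation from stereo points.
# Iteration count and inlier tolerance are assumed values for illustration.
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.03, seed=None):
    """Fit a plane (unit normal n, offset d with n.x + d = 0) to Nx3 points."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        inliers = int((np.abs(points @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

if __name__ == "__main__":
    # Synthetic "ground" near y = 0 plus a few off-plane obstacle points.
    rng = np.random.default_rng(0)
    ground = np.column_stack([rng.uniform(-2, 2, 500),
                              rng.normal(0, 0.01, 500),
                              rng.uniform(0.5, 5, 500)])
    obstacle = np.column_stack([rng.uniform(-0.2, 0.2, 50),
                                rng.uniform(0.3, 1.2, 50),
                                rng.uniform(1, 2, 50)])
    (normal, offset), count = ransac_plane(np.vstack([ground, obstacle]))
    print("plane normal:", np.round(normal, 2), "inliers:", count)
```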
Multi-Section Sensing and Vibrotactile Perception for Walking Guide of Visually Impaired Person.
Jeong, Gu-Young; Yu, Kee-Ho
2016-07-12
Electronic Travel Aids (ETAs) improve the mobility of visually impaired persons, but it is not easy to develop an ETA satisfying all the factors needed for reliable object detection, effective notification, and actual usability. In this study, the authors developed an easy-to-use ETA providing reliable object detection and successful feedback to the user by tactile stimulation. Seven ultrasonic sensors facing in different directions detect obstacles in the walking path, while vibrators in the tactile display stimulate the hand according to the distribution of obstacles. The detection of ground drop-offs activates the electromagnetic brakes linked to the rear wheels. To verify the feasibility of the developed ETA in the outdoor environment, walking tests by blind participants were performed, and the safety with respect to ground drop-offs was evaluated. From the experiment, the feasibility of the developed ETA was shown to be sufficient, provided the sensor ranges for hanging obstacle detection are improved and learning time is provided for the ETA. Finally, a lightweight, low-cost ETA designed and assembled based on the evaluation of the developed ETA is introduced to show the improvement in portability and usability, and is compared with the previously developed ETAs.
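The sensing-to-feedback mapping described above turns several direction-specific range readings into vibration intensities, with closer obstacles producing stronger vibration. A minimal sketch follows; the number of channels matches the seven sensors mentioned, but the working range and the linear intensity scaling are assumptions.

```python
# Map seven directional ultrasonic range readings to vibrator intensities (0..1).
# The 0.2-2.0 m working range and linear scaling are assumptions for illustration.
NEAR_M, FAR_M = 0.2, 2.0

def ranges_to_vibration(ranges_m):
    """Closer obstacles -> stronger vibration on the corresponding tactor."""
    intensities = []
    for r in ranges_m:
        if r >= FAR_M:
            intensities.append(0.0)                        # nothing within range
        else:
            r = max(r, NEAR_M)
            intensities.append((FAR_M - r) / (FAR_M - NEAR_M))
    return intensities

# Example: an obstacle close on the left, clear ahead and to the right.
print(ranges_to_vibration([0.4, 0.9, 1.6, 2.5, 2.5, 2.5, 2.5]))
```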
Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M
2017-07-01
The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, with the volumetric model, participants had a higher success rate, placed their trailing foot before crossing and their leading foot after crossing more consistently, and placed their leading foot closer to the obstacle after crossing than with no model. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.
Assessment of a simple obstacle detection device for the visually impaired.
Lee, Cheng-Lung; Chen, Chih-Yung; Sung, Peng-Cheng; Lu, Shih-Yi
2014-07-01
A simple obstacle detection device, based upon an automobile parking sensor, was assessed as a mobility aid for the visually impaired. A questionnaire survey for mobility needs was performed at the start of this study. After the detector was developed, five blindfolded sighted and 15 visually impaired participants were invited to conduct travel experiments under three test conditions: (1) using a white cane only, (2) using the obstacle detector only and (3) using both devices. A post-experiment interview regarding the usefulness of the obstacle detector for the visually impaired participants was performed. The results showed that the obstacle detector could augment mobility performance with the white cane. The obstacle detection device should be used in conjunction with the white cane to achieve the best mobility speed and body protection.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid for people such as the visually impaired or the aged. This paper presents a recognition algorithm that finds the line formed by a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that exist in the way of the person accompanied by the guide robot.
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance to the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2017-01-01
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Jieyi Li; Arandjelovic, Ognjen
2017-07-01
Computer science and machine learning in particular are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of the state of the art techniques can be a major obstacle in their usability by health care professionals and thus, their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long term prognostics, in an intuitive and easily interpretable manner.
Visually guided gait modifications for stepping over an obstacle: a bio-inspired approach.
Silva, Pedro; Matos, Vitor; Santos, Cristina P
2014-02-01
There is an increasing interest in conceiving robotic systems that are able to move and act in an unstructured, not predefined environment, for which autonomy and adaptability are crucial features. In nature, animals are autonomous biological systems, which often serve as bio-inspiration models, not only for their physical and mechanical properties, but also for their control structures that enable adaptability and autonomy, for which learning is (at least) partially responsible. This work proposes a system which seeks to enable a quadruped robot to learn online to detect an obstacle in its path and to avoid stumbling on it. The detection relies on a forward internal model that estimates the robot's perceptive information by exploiting the repetitive nature of locomotion. The system adapts the locomotion in order to place the robot optimally before attempting to step over the obstacle, avoiding any stumbling. Locomotion adaptation is achieved by changing control parameters of a central pattern generator (CPG)-based locomotion controller. The mechanism learns the alterations to the stride length necessary to adapt the locomotion by changing the required CPG parameter. Both learning tasks occur online and together define a sensorimotor map, which enables the robot to learn to step over the obstacle in its path. Simulation results show the feasibility of the proposed approach.
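The abstract describes adapting stride length by changing one parameter of a CPG-based controller once an obstacle is predicted. A minimal sketch of that idea follows, using a single phase oscillator whose amplitude stands in for stride length and a rule that picks a stride so the last step ends just before the obstacle; the oscillator form, gains, and adaptation rule are illustrative assumptions, not the paper's controller.

```python
# Toy CPG: one phase oscillator whose output amplitude stands in for stride length.
# The oscillator form, gains, and adaptation rule are illustrative assumptions.
import math

class StrideCPG:
    def __init__(self, freq_hz=1.0, stride_m=0.30):
        self.phase = 0.0
        self.freq = freq_hz
        self.stride = stride_m          # parameter the adaptation rule adjusts

    def step(self, dt=0.01):
        """Advance the oscillator and return the fore-aft foot trajectory sample."""
        self.phase = (self.phase + 2 * math.pi * self.freq * dt) % (2 * math.pi)
        return self.stride * math.sin(self.phase)

    def adapt_to_obstacle(self, distance_to_obstacle_m, desired_clearance_m=0.05):
        """Pick a stride so an integer number of strides ends just before the obstacle."""
        usable = distance_to_obstacle_m - desired_clearance_m
        n_strides = max(1, round(usable / self.stride))
        self.stride = usable / n_strides

cpg = StrideCPG()
cpg.adapt_to_obstacle(distance_to_obstacle_m=1.0)
print(f"adapted stride: {cpg.stride:.3f} m, sample output: {cpg.step():.3f} m")
```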
An indoor navigation system for the visually impaired.
Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.
Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor
Yang, Kailun; Wang, Kaiwei; Cheng, Ruiqi; Hu, Weijian; Huang, Xiao; Bai, Jian
2017-01-01
The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported as they offer portability, function-diversity and cost-effectiveness. However, these approaches provide only weak support for traversability awareness and take no precautions against stepping into water areas, because polarization cues are not exploited. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable area and water hazards simultaneously with polarization-color-depth-attitude information to enhance safety during navigation. The approach has been tested on a pRGB-D dataset, which was built for tuning parameters and evaluating the performance. Moreover, the approach has been integrated into a wearable prototype which generates stereo sound feedback to guide visually impaired people (VIP) to follow the prioritized direction and to avoid obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability. PMID:28817069
A GPU-accelerated cortical neural network model for visually guided robot navigation.
Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L
2015-12-01
Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.
Novak, Alison C; Deshpande, Nandini
2014-06-01
The ability to safely negotiate obstacles is an important component of independent mobility, requiring adaptive locomotor responses to maintain dynamic balance. This study examined the effects of aging and visual-vestibular interactions on whole-body and segmental control during obstacle crossing. Twelve young and 15 older adults walked along a straight pathway and stepped over one obstacle placed in their path. The task was completed under 4 conditions which included intact or blurred vision, and intact or perturbed vestibular information using galvanic vestibular stimulation (GVS). Global task performance significantly increased under suboptimal vision conditions. Vision also significantly influenced medial-lateral center of mass displacement, irrespective of age and GVS. Older adults demonstrated significantly greater trunk pitch and head roll angles under suboptimal vision conditions. Similar to whole-body control, no GVS effect was found for any measures of segmental control. The results indicate a significant reliance on visual but not vestibular information for locomotor control during obstacle crossing. The lack of differences in GVS effects suggests that vestibular information is not up-regulated for obstacle avoidance. This is not differentially affected by aging. In older adults, insufficient visual input appears to affect ability to minimize anterior-posterior trunk movement despite a slower obstacle crossing time and walking speed. Combined with larger medial-lateral deviation of the body COM with insufficient visual information, the older adults may be at a greater risk for imbalance or inability to recover from a possible trip when stepping over an obstacle. Copyright © 2014 Elsevier B.V. All rights reserved.
Obstacle Characterization in a Geocrowdsourced Accessibility System
NASA Astrophysics Data System (ADS)
Qin, H.; Aburizaiza, A. O.; Rice, R. M.; Paez, F.; Rice, M. T.
2015-08-01
Transitory obstacles - random, short-lived and unpredictable objects - are difficult to capture in any traditional mapping system, yet they have significant negative impacts on the accessibility of mobility- and visually-impaired individuals. These transitory obstacles include sidewalk obstructions, construction detours, and poor surface conditions. To identify these obstacles and assist the navigation of mobility- and visually-impaired individuals, crowdsourced mapping applications have been developed to harvest and analyze volunteered obstacle reports from local students, faculty, staff, and residents. In this paper, we introduce a training program designed and implemented for recruiting and motivating contributors to participate in our geocrowdsourced accessibility system, and explore the quality of the geocrowdsourced data with a comparative analysis methodology.
NASA Astrophysics Data System (ADS)
Zapf, Marc Patrick H.; Boon, Mei-Ying; Lovell, Nigel H.; Suaning, Gregg J.
2016-04-01
Objective. The prospective efficacy of peripheral retinal prostheses for guiding orientation and mobility in the absence of residual vision, as compared to an implant for the central visual field (VF), was evaluated using simulated prosthetic vision (SPV). Approach. Sighted volunteers wearing a head-mounted display performed an obstacle circumvention task under SPV. Mobility and orientation performance with three layouts of prosthetic vision were compared: peripheral prosthetic vision of higher visual acuity (VA) but limited VF, of wider VF but limited VA, as well as centrally restricted prosthetic vision. Learning curves using these layouts were compared by fitting an exponential model to the mobility and orientation measures. Main results. Using peripheral layouts, performance was superior to the central layout. Walking speed with both higher-acuity and wider-angle layouts was 5.6% higher, and mobility errors were reduced by 46.4% and 48.6%, respectively, as compared to the central layout. The wider-angle layout yielded the fewest collisions, 63% fewer than the higher-acuity and 73% fewer than the central layout. Using peripheral layouts, the number of visual-scanning related head movements was 54.3% (higher-acuity) and 60.7% (wider-angle) lower, as compared to the central layout, and the ratio of time standing versus time walking was 51.9% and 61.5% lower, respectively. Learning curves did not differ between layouts, except for time standing versus time walking, where both peripheral layouts achieved significantly lower asymptotic values compared to the central layout. Significance. Beyond complementing residual vision for improved performance, peripheral prosthetic vision can effectively guide mobility in the later stages of retinitis pigmentosa (RP) without residual vision. Further, the temporal dynamics of learning peripheral and central prosthetic vision are similar. Therefore, development of a peripheral retinal prosthesis and early implantation to alleviate VF constriction in RP should be considered to extend the target group and the time of benefit for potential retinal prosthesis implantees.
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: The problem of visual impairment is becoming more severe due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions with a higher sensing range to notify users of potential dangers in advance.
An Indoor Navigation System for the Visually Impaired
Guerrero, Luis A.; Vasquez, Francisco; Ochoa, Sergio F.
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment. PMID:22969398
Assessment of a visually guided autonomous exploration robot
NASA Astrophysics Data System (ADS)
Harris, C.; Evans, R.; Tidey, E.
2008-10-01
A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.
The Effects of Distance and Intervening Obstacles on Visual Inference in Blind and Sighted Children.
ERIC Educational Resources Information Center
Bigelow, Ann E.
1991-01-01
Blind and visually impaired children, and children with normal sight, were asked whether an observer could see a toy from varying distances under conditions in which obstacles did or did not intervene between the toy and the observer. Blind children took longer than other children to master the task. (BC)
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered as reference images. The algorithm acquires an input image frame, selects a region of interest and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, then there is no obstacle in the next frame. If the mean square error is greater than α, then there are two possibilities: either there is an obstacle or the floor type has changed. In order to check whether the floor has changed, the algorithm computes the mean square error of the next frame against all stored floor types. If the minimum of these mean square errors is less than the threshold value α, then the floor type has changed; otherwise there is an obstacle. The proposed algorithm works in real time and 96% accuracy has been achieved.
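The frame-comparison logic above can be summarized in a few lines. The following Python sketch assumes grayscale image regions as NumPy arrays and uses the threshold name alpha; the helper names are assumptions for illustration, not the authors' implementation.

import numpy as np

def mse(a, b):
    """Mean square error between two equally sized grayscale image regions."""
    return float(np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2))

def classify_next_frame(current_roi, next_roi, floor_references, alpha):
    """Return 'no obstacle', 'floor type changed' or 'obstacle' for the next frame."""
    if mse(current_roi, next_roi) < alpha:
        return "no obstacle"
    # The frame changed noticeably: either the floor type changed or an obstacle appeared.
    if min(mse(next_roi, ref) for ref in floor_references) < alpha:
        return "floor type changed"
    return "obstacle"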
Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making.
Shim, Youngbo; Kim, Gon-Woo
2018-03-29
In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods have focused solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage, and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for making a detour around obstacles safely. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. The simulation and experiment results show the validity of the proposed method.
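As a rough illustration of the selective decision-making between the two modes, the sketch below picks Entry mode only when the measured gap between obstacles leaves a safety margin on both sides of the robot. This is not the EGC formulation; the margin value and the gap-measurement interface are assumptions made purely for illustration.

def choose_mode(gap_width: float, robot_width: float, margin: float = 0.2) -> str:
    """Pick Entry mode when the gap between obstacles is safely passable,
    otherwise detour around the obstacles (Bypass mode)."""
    return "Entry" if gap_width >= robot_width + 2 * margin else "Bypass"

print(choose_mode(gap_width=1.0, robot_width=0.5))  # Entry
print(choose_mode(gap_width=0.7, robot_width=0.5))  # Bypass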
[Visually-impaired adolescents' interpersonal relationships at school].
Bezerra, Camilla Pontes; Pagliuca, Lorita Marlena Freitag
2007-09-01
This study describes the school environment and how interpersonal relationships are conducted in view of the needs of visually handicapped adolescents. Data were collected through observations of the physical environment of two schools in Fortaleza, Ceara, Brazil, with the support of a checklist, in order to analyze the existence of obstacles. Four visually handicapped adolescents from 14 to 20 years of age were interviewed. Conclusions were that the obstacles that hamper the free locomotion, communication, and physical and social interaction of the blind--or people with other eye disorders--during their activities at school are numerous.
Evacuation simulation with consideration of obstacle removal and using game theory
NASA Astrophysics Data System (ADS)
Lin, Guan-Wen; Wong, Sai-Keung
2018-06-01
In this paper, we integrate a cellular automaton model with game theory to simulate crowd evacuation from a room with consideration of obstacle removal. The room has one or more exits, one of which is blocked by obstacles. The obstacles at the exit can be removed by volunteers. We investigate the cooperative and defecting behaviors of pedestrians during evacuation. The yielder game and volunteer's dilemma game are employed to resolve interpedestrian conflict. An anticipation floor field is proposed to guide the pedestrians to avoid obstacles that are being removed. We conducted experiments to determine how a variety of conditions affect overall crowd evacuation and volunteer evacuation times. The conditions were the start time of obstacle removal, number of obstacles, placement of obstacles, time spent in obstacle removal, strength of the anticipation floor field, and obstacle visibility distance. We demonstrate how reciprocity can be achieved among pedestrians and increases the efficiency of the entire evacuation process.
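For readers unfamiliar with floor fields, the sketch below computes a standard static floor field for a cellular automaton evacuation model: each free cell stores its walking distance to the nearest exit, and pedestrians prefer moves toward decreasing values. This is a generic ingredient of such models, not the anticipation floor field proposed in the paper; the grid encoding and names are assumptions for illustration.

from collections import deque

def static_floor_field(grid, exits):
    """grid: 2D list with 0 = free cell, 1 = obstacle; exits: list of (row, col).
    Returns per-cell walking distance to the nearest exit (inf if unreachable)."""
    rows, cols = len(grid), len(grid[0])
    field = [[float("inf")] * cols for _ in range(rows)]
    queue = deque()
    for r, c in exits:
        field[r][c] = 0
        queue.append((r, c))
    while queue:  # breadth-first search from all exits simultaneously
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if field[r][c] + 1 < field[nr][nc]:
                    field[nr][nc] = field[r][c] + 1
                    queue.append((nr, nc))
    return field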
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting the threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinate information of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, and the designed power line inspection system is effective in complex backgrounds, with no missed detections under different conditions. PMID:28203269
Obstacle detectors for automated transit vehicles: A technoeconomic and market analysis
NASA Technical Reports Server (NTRS)
Lockerby, C. E.
1979-01-01
A search was conducted to identify the technical and economic characteristics of both NASA and non-NASA obstacle detectors. The findings, along with market information, were compiled and analyzed for consideration by DOT and NASA in decisions about any future automated transit vehicle obstacle detector research, development, or applications project. Currently available obstacle detectors and systems under development are identified by type (sonic, capacitance, infrared/optical, guided radar, and probe contact) and compared with the three NASA devices selected as possible improvements or solutions to the problems in existing obstacle detection systems. Cost analyses and market forecasts for the AGT and AMTV markets individually are included.
Prado Vega, Rocío; van Leeuwen, Peter M.; Rendón Vélez, Elizabeth; Lemij, Hans G.; de Winter, Joost C. F.
2013-01-01
The objective of this study was to evaluate differences in driving performance, visual detection performance, and eye-scanning behavior between glaucoma patients and control participants without glaucoma. Glaucoma patients (n = 23) and control participants (n = 12) completed four 5-min driving sessions in a simulator. The participants were instructed to maintain the car in the right lane of a two-lane highway while their speed was automatically maintained at 100 km/h. Additional tasks per session were: Session 1: none, Session 2: verbalization of projected letters, Session 3: avoidance of static obstacles, and Session 4: combined letter verbalization and avoidance of static obstacles. Eye-scanning behavior was recorded with an eye-tracker. Results showed no statistically significant differences between patients and control participants for lane keeping, obstacle avoidance, and eye-scanning behavior. Steering activity, number of missed letters, and letter reaction time were significantly higher for glaucoma patients than for control participants. In conclusion, glaucoma patients were able to avoid objects and maintain a nominal lane keeping performance, but applied more steering input than control participants, and were more likely than control participants to miss peripherally projected stimuli. The eye-tracking results suggest that glaucoma patients did not use extra visual search to compensate for their visual field loss. Limitations of the study, such as small sample size, are discussed. PMID:24146975
The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration
NASA Astrophysics Data System (ADS)
Zhao, Ming; Han, Baoling
2016-11-01
The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that combines a binocular stereo vision sensor and a self-built 3D Lidar with a modified ant colony optimization path planner to reconstruct the environmental map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unreliable when feature points are few and lighting is poor. Therefore, the system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and sensor information fusion is used to rebuild the environment map. First, obstacles are detected separately from the Lidar data and the visual data; the two detection results are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyses the advantages and disadvantages of ant colony optimization and their causes in depth, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and the sensor integration overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed and poor search results. The experiment processes images and programs the motor drive in Matlab and Visual Studio and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a Linux simulation platform.
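A hedged sketch of the fusion step described above: obstacle evidence from the Lidar and from stereo vision, each expressed as a per-cell occupancy probability on a common grid, is combined with a noisy-OR rule so that a cell counts as occupied if either sensor detects an obstacle there. The grid registration, the fusion rule and the names are assumptions for illustration, not the thesis' exact method.

import numpy as np

def fuse_occupancy(p_lidar, p_vision):
    """Combine two per-cell obstacle probabilities (same shape, values in [0, 1]):
    a cell is occupied if either sensor believes it is (noisy-OR)."""
    return 1.0 - (1.0 - p_lidar) * (1.0 - p_vision)

# Toy example on a 2 x 3 grid.
p_lidar = np.array([[0.1, 0.8, 0.0], [0.0, 0.2, 0.9]])
p_vision = np.array([[0.2, 0.3, 0.0], [0.0, 0.7, 0.4]])
obstacle_mask = fuse_occupancy(p_lidar, p_vision) > 0.5  # boolean map passed to the planner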
Retrocausation acting in the single-electron double-slit interference experiment
NASA Astrophysics Data System (ADS)
Hokkyo, Noboru
The single electron double-slit interference experiment is given a time-symmetric interpretation and visualization in terms of the intermediate amplitude of transition between the particle source and the detection point. It is seen that the retarded (causal) amplitude of the electron wave expanding from the source shows an advanced (retrocausal) bifurcation and merging in passing through the double-slit and converges towards the detection point as if guided by the advanced (retrocausal) wave from the detected electron. An experiment is proposed to confirm the causation-retrocausation symmetry of the electron behavior by observing the insensitivity of the interference pattern to non-magnetic obstacles placed in the shadows of the retarded and advanced waves appearing on the rear and front sides of the double-slit.
Technology-Enhanced Learning in Science (TELS)
NASA Astrophysics Data System (ADS)
Linn, Marcia
2006-12-01
The overall research question addressed by the NSF-funded Technology-Enhanced Learning in Science (TELS) Center is whether interactive scientific visualizations embedded in high-quality instructional units can be used to increase pre-college student learning in science. The research draws on the knowledge integration framework to guide the design of instructional modules, professional development activities, and assessment activities. This talk reports on results from the first year, in which 50 teachers taught one of the 12 TELS modules in over 200 classes in 16 diverse schools. Assessments scored with the knowledge integration rubric showed that students made progress in learning complex physics topics such as electricity, mechanics, and thermodynamics. Teachers encountered primarily technological obstacles that the research team was able to address prior to implementation. Powerful scientific visualizations required extensive instructional supports to communicate to students. Currently, TELS is refining the modules, professional development, and assessments based on evidence from the first year. Preliminary design principles intended to help research teams build on the findings will be presented for audience feedback and discussion.
Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T
2018-01-01
The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision is comprised of functionally and anatomically dissociable systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
47 CFR 87.483 - Audio visual warning systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
47 CFR 87.483 (Aviation Services, Stations in the Radiodetermination Service): An audio visual warning system (AVWS) is a radar-based obstacle avoidance system. AVWS activates...
Adaptive Gaze Strategies for Locomotion with Constricted Visual Field
Authié, Colas N.; Berthoz, Alain; Sahel, José-Alain; Safran, Avinoam B.
2017-01-01
In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame, with or without obstacle on the way. Whole body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even knowing no obstacle was present), of door edges, essentially of the proximal one, of obstacle edge/corner, and alternating door edges fixations when approaching the door. This was associated with more frequent, sometimes larger rapid-eye-movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RPs. These findings identify the adaptive changes in sensory-motor coordination, in order to ensure visual awareness of the surrounding, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures. PMID:28798674
Srinivasan, Mandyam V
2011-04-01
Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.
Delay and Standard Deviation Beamforming to Enhance Specular Reflections in Ultrasound Imaging.
Bandaru, Raja Sekhar; Sornes, Anders Rasmus; Hermans, Jeroen; Samset, Eigil; D'hooge, Jan
2016-12-01
Although interventional devices, such as needles, guide wires, and catheters, are best visualized by X-ray, real-time volumetric echography could offer an attractive alternative as it avoids ionizing radiation; it provides good soft tissue contrast, and it is mobile and relatively cheap. Unfortunately, as echography is traditionally used to image soft tissue and blood flow, the appearance of interventional devices in conventional ultrasound images remains relatively poor, which is a major obstacle toward ultrasound-guided interventions. The objective of this paper was therefore to enhance the appearance of interventional devices in ultrasound images. Thereto, a modified ultrasound beamforming process using conventional-focused transmit beams is proposed that exploits the properties of received signals containing specular reflections (as arising from these devices). This new beamforming approach referred to as delay and standard deviation beamforming (DASD) was quantitatively tested using simulated as well as experimental data using a linear array transducer. Furthermore, the influence of different imaging settings (i.e., transmit focus, imaging depth, and scan angle) on the obtained image contrast was evaluated. The study showed that the image contrast of specular regions improved by 5-30 dB using DASD beamforming compared with traditional delay and sum (DAS) beamforming. The highest gain in contrast was observed when the interventional device was tilted away from being orthogonal to the transmit beam, which is a major limitation in standard DAS imaging. As such, the proposed beamforming methodology can offer an improved visualization of interventional devices in the ultrasound image with potential implications for ultrasound-guided interventions.
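The core contrast between conventional delay-and-sum (DAS) and the proposed delay-and-standard-deviation (DASD) beamforming can be sketched compactly. The snippet below assumes the per-channel RF samples have already been delay-aligned for each pixel; the array shapes, the dummy data and the names are assumptions for illustration, not the authors' implementation.

import numpy as np

def das(aligned):
    """aligned: (n_channels, n_pixels) delay-aligned samples -> coherent-sum image line."""
    return np.abs(aligned.sum(axis=0))

def dasd(aligned):
    """Replace the coherent sum by the standard deviation across the aperture,
    a statistic sensitive to how echo energy is distributed over the channels."""
    return aligned.std(axis=0)

rng = np.random.default_rng(0)
aligned = rng.standard_normal((64, 256))        # 64 channels, 256 pixels of dummy data
print(das(aligned).shape, dasd(aligned).shape)  # (256,) (256,)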
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible.
Do characteristics of a stationary obstacle lead to adjustments in obstacle stepping strategies?
Worden, Timothy A; De Jong, Audrey F; Vallis, Lori Ann
2016-01-01
Navigating cluttered and complex environments increases the risk of falling. To decrease this risk, it is important to understand the influence of obstacle visual cues on stepping parameters; however, the specific obstacle characteristics that have the greatest influence on avoidance strategies are still under debate. The purpose of the current work is to provide further insight into the relationship between obstacle appearance in the environment and modulation of stepping parameters. Healthy young adults (N=8) first stepped over an obstacle with one visible top edge ("floating"; 8 trials), followed by trials where experimenters randomly altered the location of a ground reference object to one of 7 different positions (8 trials per location), which ranged from 6cm in front of, directly under, or up to 6cm behind the floating obstacle (at 2cm intervals). Mean take-off and landing distance as well as minimum foot clearance values were unchanged across different positions of the ground reference object; a consistent stepping trajectory was observed for all experimental conditions. Contrary to our hypotheses, results of this study indicate that ground-based visual cues are not essential for the planning of stepping and clearance strategies. The simultaneous presentation of both floating and ground-based objects may have provided critical information that led to the adoption of a consistent strategy for clearing the top edge of the obstacle. The invariant foot placement observed here may be an appropriate stepping strategy for young adults; however, this may not be the case across the lifespan or in special populations. Copyright © 2015 Elsevier B.V. All rights reserved.
The Berlin Wall of Language: The Problem and Solution.
ERIC Educational Resources Information Center
Burroughs, Evelyn
1969-01-01
Several obstacles to social and intellectual growth confront the disadvantaged student whose nonstandard dialect is unacceptable to many users of standard English. To help him develop a bidialectalism that minimizes these obstacles, the English teacher needs to guide the student to explore the ways in which language conveys meaning; to experience…
Your Job Search Organiser. The Essential Guide for a Successful Job Search.
ERIC Educational Resources Information Center
Stevens, Paul
This publication organizes job searches in Australia by creating a paperwork system and recording essential information. It is organized into two parts: career planning and job search management. Part 1 contains the following sections: job evaluation, goal setting, job search obstacles--personal constraints and job search obstacles; and job search…
A Visual Language for World Marketing.
ERIC Educational Resources Information Center
Vanden Bergh, Bruce G.; Sentell, Gerald D.
A practical solution to many of the communication obstacles found in international markets can be found in the development and widespread adoption of a standardized system of international graphic symbols. Any plan to develop and implement a truly acceptable and universal system of graphic symbols will have to overcome many obstacles that past…
Obstacle-avoiding navigation system
Borenstein, Johann; Koren, Yoram; Levine, Simon P.
1991-01-01
A system for guiding an autonomous or semi-autonomous vehicle through a field of operation having obstacles thereon to be avoided employs a memory for containing data which defines an array of grid cells which correspond to respective subfields in the field of operation of the vehicle. Each grid cell in the memory contains a value which is indicative of the likelihood, or probability, that an obstacle is present in the respectively associated subfield. The values in the grid cells are incremented individually in response to each scan of the subfields, and precomputation and use of a look-up table avoids complex trigonometric functions. A further array of grid cells is fixed with respect to the vehicle to form a conceptual active window which overlies the incremented grid cells. Thus, when the cells in the active window overlie grid cells having values which are indicative of the presence of obstacles, the values therein are used as multipliers of the precomputed vectorial values. The resulting plurality of vectorial values are summed vectorially in one embodiment of the invention to produce a virtual composite repulsive vector which is then summed vectorially with a target-directed vector for producing a resultant vector for guiding the vehicle. In an alternative embodiment, a plurality of vectors surrounding the vehicle are computed, each having a value corresponding to obstacle density. In such an embodiment, target location information is used to select between alternative directions of travel having low associated obstacle densities.
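The grid-based steering idea in this abstract can be illustrated with a short sketch in the spirit of the virtual force field: every certainty-grid cell inside the active window pushes the vehicle away in proportion to its certainty value and inversely with squared distance, and the sum of these repulsive vectors is added to a target-directed vector. Gains, window size and cell size are assumptions for illustration only.

import numpy as np

def steering_vector(grid, vehicle_rc, target_rc, window=5,
                    repulse_gain=1.0, attract_gain=1.0, cell_size=0.1):
    """grid: 2D array of obstacle certainty values; positions given as (row, col)."""
    vr, vc = vehicle_rc
    repulsive = np.zeros(2)
    for r in range(vr - window, vr + window + 1):
        for c in range(vc - window, vc + window + 1):
            if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] and (r, c) != (vr, vc):
                cells_away = np.hypot(r - vr, c - vc)
                d = cells_away * cell_size
                # Each cell repels the vehicle, weighted by its certainty, falling off as 1/d^2.
                direction = np.array([vr - r, vc - c]) / cells_away
                repulsive += repulse_gain * grid[r, c] / d**2 * direction
    to_target = np.array(target_rc, dtype=float) - np.array(vehicle_rc, dtype=float)
    attractive = attract_gain * to_target / (np.linalg.norm(to_target) + 1e-9)
    return repulsive + attractive  # resultant vector used to steer the vehicle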
NASA Astrophysics Data System (ADS)
Maeda, S.; Minami, S.; Okamoto, D.; Obara, T.
2016-09-01
The deflagration-to-detonation transition in a 100 mm square cross-section channel was investigated for a highly reactive stoichiometric hydrogen oxygen mixture at 70 kPa. Obstacles of 5 mm width and 5, 10, and 15 mm heights were equally spaced 60 mm apart at the bottom of the channel. The phenomenon was investigated primarily by time-resolved schlieren visualization from two orthogonal directions using a high-speed video camera. The detonation transition occurred over a remarkably short distance within only three or four repeated obstacles. The global flame speed just before the detonation transition was well below the sound speed of the combustion products and did not reach the sound speed of the initial unreacted gas for tests with an obstacle height of 5 and 10 mm. These results indicate that a detonation transition does not always require global flame acceleration beyond the speed of sound for highly reactive combustible mixtures. A possible mechanism for this detonation initiation was the mixing of the unreacted and reacted gas in the vicinity of the flame front convoluted by the vortex present behind each obstacle, and the formation of a hot spot by the shock wave. The final onset of the detonation originated from the unreacted gas pocket, which was surrounded by the obstacle downstream face and the channel wall.
Ros, Ivo G; Bhagavatula, Partha S; Lin, Huai-Ti; Biewener, Andrew A
2017-02-06
Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation that favoured widest gap steering. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favours vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success.
Ros, Ivo G.; Bhagavatula, Partha S.; Lin, Huai-Ti
2017-01-01
Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation that favoured widest gap steering. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favours vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success. PMID:28163883
Kopiske, Karl K; Bruno, Nicola; Hesse, Constanze; Schenk, Thomas; Franz, Volker H
2016-06-01
It has often been suggested that visual illusions affect perception but not actions such as grasping, as predicted by the "two-visual-systems" hypothesis of Milner and Goodale (1995, The Visual Brain in Action, Oxford University Press). However, at least for the Ebbinghaus illusion, relevant studies seem to reveal a consistent illusion effect on grasping (Franz & Gegenfurtner, 2008. Grasping visual illusions: consistent data and no dissociation. Cognitive Neuropsychology). Two interpretations are possible: either grasping is not immune to illusions (arguing against dissociable processing mechanisms for vision-for-perception and vision-for-action), or some other factors modulate grasping in ways that mimic a vision-for-perception effect in actions. It has been suggested that one such factor may be obstacle avoidance (Haffenden, Schiff & Goodale, 2001. The dissociation between perception and action in the Ebbinghaus illusion: nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177-181). In four different labs (total N = 144), we conducted an exact replication of previous studies suggesting obstacle avoidance mechanisms, implementing conditions that tested grasping as well as multiple perceptual tasks. This replication was supplemented by additional conditions to obtain more conclusive results. Our results confirm that grasping is affected by the Ebbinghaus illusion and demonstrate that this effect cannot be explained by obstacle avoidance. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Buccello-Stout, Regina R.; Cromwell, Ronita L.; Bloomberg, Jacob J.; Weaver, G. D.
2010-01-01
Research indicates that a main contributor to injury in older adults is falling. The decline in sensory systems limits information needed to successfully maneuver through the environment. The objective of this study was to determine if prolonged exposure to the realignment of perceptual-motor systems increases adaptability of balance, and if balance confidence improves after training. A total of 16 older adults between the ages of 65 and 85 were randomized to a control group (walking on a treadmill while viewing a static visual scene) and an experimental group (walking on a treadmill while viewing a rotating visual scene). Prior to visual exposure, participants completed six trials of walking through a soft foamed obstacle course. Participants came in twice a week for 4 weeks to complete training of walking on a treadmill and viewing the visual scene for 20 minutes each session. Participants completed the obstacle course after training and four weeks later. Average time, penalty, and Activity Balance Confidence Scale scores were computed for both groups across testing times. The older adults who trained significantly improved their time through the obstacle course, F(2, 28) = 9.41, p < 0.05, and reduced their penalty scores, F(2, 28) = 21.03, p < 0.05, compared to those who did not train. There was no difference in balance confidence scores between groups across testing times, F(2, 28) = 0.503, p > 0.05. Although the training group improved mobility through the obstacle course, there were no differences between the groups in balance confidence.
Birds achieve high robustness in uneven terrain through active control of landing conditions.
Birn-Jeffery, Aleksandra V; Daley, Monica A
2012-06-15
We understand little about how animals adjust locomotor behaviour to negotiate uneven terrain. The mechanical demands and constraints of such behaviours likely differ from uniform terrain locomotion. Here we investigated how common pheasants negotiate visible obstacles with heights from 10 to 50% of leg length. Our goal was to determine the neuro-mechanical strategies used to achieve robust stability, and address whether strategies vary with obstacle height. We found that control of landing conditions was crucial for minimising fluctuations in stance leg loading and work in uneven terrain. Variation in touchdown leg angle (θ(TD)) was correlated with the orientation of ground force during stance, and the angle between the leg and body velocity vector at touchdown (β(TD)) was correlated with net limb work. Pheasants actively targeted obstacles to control body velocity and leg posture at touchdown to achieve nearly steady dynamics on the obstacle step. In the approach step to an obstacle, the birds produced net positive limb work to launch themselves upward. On the obstacle, body dynamics were similar to uniform terrain. Pheasants also increased swing leg retraction velocity during obstacle negotiation, which we suggest is an active strategy to minimise fluctuations in peak force and leg posture in uneven terrain. Thus, pheasants appear to achieve robustly stable locomotion through a combination of path planning using visual feedback and active adjustment of leg swing dynamics to control landing conditions. We suggest that strategies for robust stability are context specific, depending on the quality of sensory feedback available, especially visual input.
Automated Guided Vehicle For Physically Handicapped People - A Cost Effective Approach
NASA Astrophysics Data System (ADS)
Kumar, G. Arun, Dr.; Sivasubramaniam, Mr. A.
2017-12-01
An Automated Guided Vehicle (AGV) is a robot-like vehicle that can deliver materials from the supply area to the technician automatically, which is faster and more efficient. The robot can be accessed wirelessly, so a technician can control it directly to deliver components rather than working through a human operator (over phone or computer) who has to program the robot or ask a delivery person to make the delivery. The vehicle is guided automatically along its path. To avoid collisions, a proximity sensor is attached to the system; it senses obstacles and can stop the vehicle in their presence. The vehicle can thus avoid accidents, which is very useful for the present industrial trend, and material and equipment handling becomes an automated, easy and time-saving process.
Ravens, Corvus corax, follow gaze direction of humans around obstacles.
Bugnyar, Thomas; Stöwe, Mareike; Heinrich, Bernd
2004-01-01
The ability to follow gaze (i.e. head and eye direction) has recently been shown for social mammals, particularly primates. In most studies, individuals could use gaze direction as a behavioural cue without understanding that the view of others may be different from their own. Here, we show that hand-raised ravens not only visually co-orient with the look-ups of a human experimenter but also reposition themselves to follow the experimenter's gaze around a visual barrier. Birds were capable of visual co-orientation already as fledglings but did not consistently track gaze direction behind obstacles before six months of age. These results raise the possibility that sub-adult and adult ravens can project a line of sight for the other person into the distance. To what extent ravens may attribute mental significance to the visual behaviour of others is discussed. PMID:15306330
McFadyen, Bradford J; Cantin, Jean-François; Swaine, Bonnie; Duchesneau, Guylaine; Doyon, Julien; Dumas, Denyse; Fait, Philippe
2009-09-01
To study the effects of sensory modality of simultaneous tasks during walking with and without obstacles after moderate to severe traumatic brain injury (TBI). Group comparison study. Gait analysis laboratory within a postacute rehabilitation facility. Volunteer sample (N=18). Persons with moderate to severe TBI (n=11) (9 men, 3 women; age, 37.56+/-13.79 y) and a comparison group (n=7) of subjects without neurologic problems matched on average for body mass index and age (4 men, 3 women; age, 39.19+/-17.35 y). Not applicable. Magnitudes and variability for walking speeds, foot clearance margins (ratio of foot clearance distance to obstacle height), and response reaction times (both direct and as a relative cost because of obstacle avoidance). The TBI group had well-recovered walking speeds and a general ability to avoid obstacles. However, these subjects did show lower trail limb toe clearances (P=.003) across all conditions. Response reaction times to the Stroop tasks were longer in general for the TBI group (P=.017), and this group showed significant increases in response reaction times for the visual modality within the more challenging obstacle avoidance task that was not observed for control subjects. A measure of multitask costs related to differences in response reaction times between obstructed and unobstructed trials also only showed increased attention costs for the visual over the auditory stimuli for the TBI group (P=.002). Mobility is a complex construct, and the present results provide preliminary findings that, even after good locomotor recovery, subjects with moderate to severe TBI show residual locomotor deficits in multitasking. Furthermore, our results suggest that sensory modality is important, and greater multitask costs occur during sensory competition (ie, visual interference).
Kim, Hoyeon; Cheang, U. Kei
2017-01-01
In order to broaden the use of microrobots in practical fields, autonomous control algorithms such as obstacle avoidance must be further developed. However, most previous studies of microrobots used manual motion control to navigate past tight spaces and obstacles, while very few studies demonstrated the use of autonomous motion. In this paper, we demonstrate a dynamic obstacle avoidance algorithm for bacteria-powered microrobots (BPMs) using electric fields in fluidic environments. A BPM consists of an artificial body, which is made of SU-8, and a dense layer of harnessed bacteria. BPMs can be controlled using externally applied electric fields due to the electrokinetic properties of bacteria. For developing dynamic obstacle avoidance for BPMs, a kinematic model of BPMs was utilized to prevent collision and a finite element model was used to characterize the deformation of the electric field near the obstacle walls. In order to avoid fast-moving obstacles, we modified our previous static obstacle avoidance approach using a modified vector field histogram (VFH) method. To validate the advanced algorithm in experiments, magnetically controlled moving obstacles were used to intercept the BPMs as the BPMs moved from the initial position to the final position. The algorithm was able to successfully guide the BPMs to reach their respective goal positions while avoiding the dynamic obstacles. PMID:29020016
Kim, Hoyeon; Cheang, U Kei; Kim, Min Jun
2017-01-01
In order to broaden the use of microrobots in practical fields, autonomous control algorithms such as obstacle avoidance must be further developed. However, most previous studies of microrobots used manual motion control to navigate past tight spaces and obstacles, while very few studies demonstrated the use of autonomous motion. In this paper, we demonstrate a dynamic obstacle avoidance algorithm for bacteria-powered microrobots (BPMs) using electric fields in fluidic environments. A BPM consists of an artificial body, which is made of SU-8, and a dense layer of harnessed bacteria. BPMs can be controlled using externally applied electric fields due to the electrokinetic properties of bacteria. For developing dynamic obstacle avoidance for BPMs, a kinematic model of BPMs was utilized to prevent collision and a finite element model was used to characterize the deformation of the electric field near the obstacle walls. In order to avoid fast-moving obstacles, we modified our previous static obstacle avoidance approach using a modified vector field histogram (VFH) method. To validate the advanced algorithm in experiments, magnetically controlled moving obstacles were used to intercept the BPMs as the BPMs moved from the initial position to the final position. The algorithm was able to successfully guide the BPMs to reach their respective goal positions while avoiding the dynamic obstacles.
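The vector field histogram step mentioned in both versions of this abstract can be sketched generically: obstacle points near the microrobot are binned into angular sectors, closer points weigh more, and the commanded heading is the free sector closest to the goal direction. The sector count, threshold and influence radius are assumptions for illustration, not the authors' modified VFH.

import numpy as np

def vfh_direction(obstacles_xy, robot_xy, goal_angle,
                  n_sectors=36, threshold=1.0, influence_radius=0.5):
    """obstacles_xy: iterable of (x, y) points; angles in radians; returns a heading or None."""
    hist = np.zeros(n_sectors)
    rx, ry = robot_xy
    for ox, oy in obstacles_xy:
        d = np.hypot(ox - rx, oy - ry)
        if d < influence_radius:
            angle = np.arctan2(oy - ry, ox - rx) % (2 * np.pi)
            sector = int(angle / (2 * np.pi) * n_sectors)
            hist[sector] += influence_radius - d  # closer obstacles weigh more
    free = [s for s in range(n_sectors) if hist[s] < threshold]
    if not free:
        return None  # no free sector: stop or replan
    centers = (np.array(free) + 0.5) * 2 * np.pi / n_sectors
    diffs = np.angle(np.exp(1j * (centers - goal_angle)))  # wrapped angular differences
    return float(centers[np.argmin(np.abs(diffs))])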
ERIC Educational Resources Information Center
Widder, Mirela; Gorsky, Paul
2013-01-01
In schools, learning spatial geometry is usually dependent upon a student's ability to visualize three dimensional geometric configurations from two dimensional drawings. Such a process, however, often creates visual obstacles which are unique to spatial geometry. Useful software programs which realistically depict three dimensional geometric…
Fast obstacle detection based on multi-sensor information fusion
NASA Astrophysics Data System (ADS)
Lu, Linli; Ying, Jie
2014-11-01
Obstacle detection is one of the key problems in areas such as driving assistance and mobile robot navigation, and a single sensor cannot meet the actual demand. A method is proposed that fuses data from a camera and an ultrasonic sensor to obtain real-time information about the obstacle in front of the robot and to calculate the real size of the obstacle area using triangle similarity in the imaging process, which supports local path planning decisions. In the image analysis stage, the obstacle detection region is limited according to a complementary principle: the ultrasonic detection range is used as the detection region when the obstacle is relatively near the robot, while the travelling road area in front of the robot is used for relatively long-distance detection. The obstacle detection algorithm is adapted from a powerful background subtraction algorithm, ViBe (Visual Background Extractor). An obstacle-free region in front of the robot is extracted in the initial frame, and this region provides a reference sample set of gray-scale values for obstacle detection. Experiments detecting different obstacles at different distances give the accuracy of the obstacle detection and the error percentage between the calculated size and the actual size of the detected obstacle. Experimental results show that the detection scheme can effectively detect obstacles in front of the robot and provide the size of the obstacle with relatively high dimensional accuracy.
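The size estimate described above follows from pinhole-camera triangle similarity once the ultrasonic range is known. A minimal sketch, with assumed variable names and example values:

```python
# Minimal sketch (assumed names, not the paper's code): estimating the real size of a
# detected obstacle by triangle similarity, fusing the ultrasonic range with the
# obstacle's apparent size in the image under a pinhole camera model.

def real_size_m(size_px: float, range_m: float, focal_length_px: float) -> float:
    """Pinhole model: object size / range = image size / focal length."""
    return size_px * range_m / focal_length_px

# Example: a 120-pixel-wide blob, 0.8 m away, seen by a camera with a 600-pixel
# focal length corresponds to an obstacle roughly 0.16 m wide.
print(real_size_m(120, 0.8, 600))
```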
Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
Yang, Kailun; Wang, Kaiwei; Hu, Weijian; Bai, Jian
2016-01-01
The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among many researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a sparse depth map in the distance, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded growing region algorithm, combining the depth image and RGB image, greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for allowing superior path planning on navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system, which consists of a wearable prototype and an audio interface. Furthermore, the presented approach has been proved to be useful and reliable by a field test with eight visually impaired volunteers. PMID:27879634
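The preliminary ground segmentation step mentioned above is, in essence, a RANSAC plane fit on the point cloud. Below is a minimal sketch of that step; the tolerance, iteration count, and example data are illustrative assumptions, not the values used with the RealSense R200.

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200, tol: float = 0.02, rng=None):
    """Return (normal, d, inlier_mask) of the best-supported plane n.x + d = 0."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Example: a flat floor near z = 0 plus a box-shaped obstacle above it.
floor = np.random.default_rng(1).uniform([-1, -1, 0], [1, 1, 0.005], (500, 3))
box = np.random.default_rng(2).uniform([0.2, 0.2, 0.1], [0.4, 0.4, 0.3], (100, 3))
normal, d, inliers = ransac_plane(np.vstack([floor, box]))
print(inliers.sum())                          # roughly the 500 floor points
```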
A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals
Hicks, Stephen L.; Wilson, Iain; Muhammed, Louwai; Worsfold, John; Downes, Susan M.; Kennard, Christopher
2013-01-01
Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision that the individual is likely to retain. Residual vision can be restricted to the awareness of the orientation of a light source, and hence any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than including finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes without the benefit of focussing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practise with no increase in the number of collisions. In a second experiment a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals. PMID:23844067
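The core rendering rule described above (nearer means brighter, on a coarse display) can be written in a few lines. A minimal sketch under assumed resolution and range limits, not the device's actual firmware:

```python
import numpy as np

def depth_to_display(depth_m: np.ndarray, near: float = 0.5, far: float = 3.0,
                     out_shape=(8, 12)) -> np.ndarray:
    """Map a depth image (metres) to 8-bit brightness on a coarse display grid."""
    clipped = np.clip(depth_m, near, far)
    brightness = 255 * (far - clipped) / (far - near)          # near -> bright, far -> dark
    h, w = depth_m.shape
    gh, gw = out_shape
    # Downsample by taking the brightest (i.e. nearest) value in each display cell.
    cells = brightness[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.max(axis=(1, 3)).astype(np.uint8)

depth = np.full((240, 320), 2.5)
depth[100:140, 150:200] = 0.8                                  # a nearby obstacle
print(depth_to_display(depth).max())                           # the obstacle shows up bright
```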
Animating Wall-Bounded Turbulent Smoke via Filament-Mesh Particle-Particle Method.
Liao, Xiangyun; Si, Weixin; Yuan, Zhiyong; Sun, Hanqiu; Qin, Jing; Wang, Qiong; Heng, Pheng-Ann
2018-03-01
Turbulent vortices in smoke flows are crucial for a visually interesting appearance. Unfortunately, it is challenging to efficiently simulate these appealing effects in the framework of vortex filament methods. The vortex-filaments-in-grids scheme allows turbulent smoke with macroscopic vortical structures to be generated efficiently, but it suffers from projection-related dissipation, and thus small-scale vortical structures below the grid resolution are hard to capture. In addition, this scheme cannot be applied to wall-bounded turbulent smoke simulation, which requires efficiently handling smoke-obstacle interaction and creating vorticity at the obstacle boundary. To tackle the above issues, we propose an effective filament-mesh particle-particle (FMPP) method for fast wall-bounded turbulent smoke simulation with ample details. The Filament-Mesh component approximates the smooth long-range interactions by splatting vortex filaments onto a grid, solving the Poisson problem with a fast solver, and then interpolating back to smoke particles. The Particle-Particle component introduces a smoothed particle hydrodynamics (SPH) turbulence model for particles in the same grid cell, where interactions between particles cannot be properly captured under the grid resolution. Then, we sample the surface of obstacles with boundary particles, allowing the interaction between smoke and obstacles to be treated as pressure forces in SPH. Besides, the vortex formation region is defined at the back of obstacles, providing smoke particles flowing past the separation particles with a vorticity force to simulate the subsequent vortex shedding phenomenon. The proposed approach can synthesize the lost small-scale vortical structures and also achieve smoke-obstacle interaction with vortex shedding at obstacle boundaries in a lightweight manner. The experimental results demonstrate that our FMPP method can achieve more appealing visual effects than the vortex-filaments-in-grids scheme by efficiently simulating more vivid thin turbulent features.
Hierarchical Shared Control of Cane-Type Walking-Aid Robot
Tao, Chunjing
2017-01-01
A hierarchical shared-control method for the walking-aid robot, covering both human motion intention recognition and an obstacle emergency-avoidance method based on an artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of the sensory system composed of 4 force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and to guide the operator based on the repulsion force calculated from the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also adopted according to the different states of obstacles or emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805
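The repulsion force referred to above follows the classic APF construction, in which obstacles within an influence distance push the platform away with a force that grows as they get closer. A minimal sketch with illustrative gains (not the paper's parameters):

```python
import numpy as np

def apf_repulsion(obstacles_xy: np.ndarray, eta: float = 1.0, d0: float = 1.0) -> np.ndarray:
    """Sum the classic APF repulsive forces from all obstacles within distance d0."""
    force = np.zeros(2)
    for p in obstacles_xy:                       # obstacle points in the robot frame
        d = np.linalg.norm(p)
        if 1e-6 < d < d0:
            # Gradient of U_rep = 0.5*eta*(1/d - 1/d0)^2, pointing away from the obstacle.
            force += eta * (1.0 / d - 1.0 / d0) / d**2 * (-p / d)
    return force

# Obstacle 0.5 m ahead and slightly to the right -> force pushes back and to the left.
print(apf_repulsion(np.array([[0.5, -0.1]])))
```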
Imagination Unlimited: A Guide for Creative Problem Solving, Upper Elementary Summer School.
ERIC Educational Resources Information Center
Cleveland Public Schools, OH. Div. of Major Work Classes.
The guide gives procedures for helping gifted upper elementary school students in Major Work classes utilize their imagination. Appropriate literary quotes introduce a discussion on creativity, which involves the imaginative recombination of known ideas into something new. Considered are obstacles that work against creativity such as mental…
Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device.
Katzschmann, Robert K; Araki, Brandon; Rus, Daniela
2018-03-01
This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device presented is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of a user's waist, and the pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases.
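The distance-to-vibration mapping that a device like ALVU relies on can be illustrated with a simple monotone scaling: each sensor's range reading drives the motor in the matching direction, stronger for nearer obstacles. A minimal sketch with assumed range limits and PWM scaling, not the device's firmware:

```python
def vibration_levels(distances_m, min_d=0.3, max_d=2.0, max_pwm=255):
    """Map one distance reading per sensor to a PWM duty value for its motor."""
    levels = []
    for d in distances_m:
        if d >= max_d:
            levels.append(0)                       # nothing nearby: motor off
        else:
            d = max(d, min_d)
            frac = (max_d - d) / (max_d - min_d)   # 0 at max_d, 1 at min_d
            levels.append(int(round(frac * max_pwm)))
    return levels

# Seven sensors across the belt; an obstacle close on the left, open space elsewhere.
print(vibration_levels([0.4, 1.1, 1.9, 2.5, 2.5, 2.5, 2.5]))
```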
Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor
NASA Astrophysics Data System (ADS)
Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso
2018-04-01
Automatic navigation for drones is being developed these days, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that obstacles are fully detected. PID controllers based on visual feedback control are used to control the drone's movement. The obstacle avoidance system was evaluated by observing the program's decisions under several obstacle conditions read by the camera and ultrasonic sensors.
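The visual-feedback PID loop mentioned above can be sketched as a textbook controller driven by the pixel offset of the tracked feature. The gains, loop rate, and error signal below are illustrative assumptions, not the authors' values.

```python
class PID:
    """Standard proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Error: horizontal pixel offset between the tracked feature and the image centre.
pid = PID(kp=0.005, ki=0.0005, kd=0.002, dt=1 / 30)
for offset_px in [80, 60, 35, 15, 5]:
    print(round(pid.update(offset_px), 3))   # lateral command shrinks as the error shrinks
```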
Creating a Visualization Powerwall
NASA Technical Reports Server (NTRS)
Miller, B. H.; Lambert, J.; Zamora, K.
1996-01-01
From Introduction: This paper presents the issues of constructing a Visualization Powerwall. For each hardware component, the requirements, options, and our solution are presented. This is followed by a short description of each pilot project. In the summary, current obstacles and options discovered along the way are presented.
Imaging behind opaque obstacle: a potential method for guided in vitro needle placement
Perinchery, Sandeep Menon; Shinde, Anant; Matham, Murukeshan Vadakke
2016-01-01
We report a simple real-time optical imaging concept using an axicon lens to image objects kept behind opaque obstacles in free space. The proposed concept underlines the importance and advantages of using an axicon lens compared to a conventional lens to image behind the obstacle. The potential of this imaging concept is demonstrated by imaging the insertion of a surgical needle into a biological specimen in real time, without blocking the field of view. It is envisaged that the proposed concept and methodology can make a telling impact in a wide variety of areas, especially for diagnostics, therapeutics and microscopy applications. PMID:28018744
Integrated Collision Avoidance System for Air Vehicle
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2013-01-01
Collision with ground/water/terrain and midair obstacles is one of the common causes of severe aircraft accidents. The various data from the coremicro AHRS/INS/GPS Integration Unit, terrain database, and object detection sensors are processed to produce collision warning audio/visual messages and collision detection and avoidance of terrain and obstacles through generation of guidance commands in a closed-loop system. The vision sensors provide more information for the Integrated System, such as terrain recognition and ranging of terrain and obstacles, which plays an important role in the improvement of the Integrated Collision Avoidance System.
Food experiences and eating patterns of visually impaired and blind people.
Bilyk, Marie Claire; Sontrop, Jessica M; Chapman, Gwen E; Barr, Susan I; Mamer, Linda
2009-01-01
The number of visually impaired and blind Canadians will rise dramatically as our population ages, and yet little is known about the impact of blindness on the experience of food and eating. In this qualitative study, the food experiences and eating patterns of visually impaired and blind people were examined. Influencing factors were also explored. In 2000, nine blind or severely visually impaired subjects were recruited through blindness-related organizations in British Columbia. Participants completed individual semi-structured, in-depth interviews. These were transcribed verbatim, coded, and analyzed to explicate participants' experiences. Participants experienced blindness-related obstacles when shopping for food, preparing food, and eating in restaurants. Inaccessible materials and environments left participants with a diet lacking in variety and limited access to physical activity. Seven participants were overweight or obese, a finding that may be related to limited physical activity and higher-than-average restaurant use. This is the first study in which the experience of food and eating is described from the perspective of visually impaired Canadians. Nutrition and blindness professionals must work together to reduce the food-related obstacles faced by visually impaired and blind people. Professionals must address both individual skill development and social and structural inequities.
Lee, Moses; Guyton, Gregory P; Zahoor, Talal; Schon, Lew C
2016-01-01
As a standard open approach, the lateral oblique incision has been widely used for calcaneal displacement osteotomy. However, just as with other orthopedic procedures that use an open approach, complications, including wound healing problems and neurovascular injury in the heel, have been reported. To help avoid these limitations, a percutaneous technique using a Shannon burr for calcaneal displacement osteotomy was introduced. However, relying on a free-hand technique without direct visualization at the osteotomy site has been a major obstacle for this technique. To address this problem, we developed a technical tip using a reference Kirschner wire. A reference Kirschner wire technique provides a reliable and accurate guide for minimally invasive calcaneal displacement osteotomy. Also, the technique should be easy to learn for surgeons new to the procedure. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Teleautonomous guidance for mobile robots
NASA Technical Reports Server (NTRS)
Borenstein, J.; Koren, Y.
1990-01-01
Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
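The VFF shared-control rule described above can be illustrated as follows: the operator's commanded direction acts as an attractive force, occupied grid cells repel in proportion to their certainty and inverse-square distance, and the robot steers along the resultant. The sketch below is a minimal illustration with assumed grid conventions and gains, not the original implementation.

```python
import numpy as np

def vff_heading(occupancy: np.ndarray, cell_size: float, robot_rc: tuple,
                operator_dir: float, f_attract: float = 1.0, f_repel: float = 0.2) -> float:
    """Return a steering heading (rad) from the operator command plus cell repulsions."""
    force = f_attract * np.array([np.cos(operator_dir), np.sin(operator_dir)])
    r0, c0 = robot_rc
    for (r, c), occupied in np.ndenumerate(occupancy):
        if occupied and (r, c) != (r0, c0):
            offset = np.array([c - c0, r0 - r]) * cell_size    # x to the right, y forward
            d = np.linalg.norm(offset)
            force += -f_repel * occupied / d**2 * (offset / d)  # push away from the cell
    return np.arctan2(force[1], force[0])

# Operator commands "straight ahead" (pi/2); an occupied cell two cells ahead and one
# to the right nudges the resultant heading slightly to the left.
grid = np.zeros((5, 5))
grid[0, 3] = 1.0
print(vff_heading(grid, cell_size=0.5, robot_rc=(2, 2), operator_dir=np.pi / 2))
```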
A wearable multipoint ultrasonic travel aids for visually impaired
NASA Astrophysics Data System (ADS)
Ercoli, Ilaria; Marchionni, Paolo; Scalise, Lorenzo
2013-09-01
In 2010, the World Health Organization estimated that there were about 285 million people in the world with disabling eyesight loss (246 million visually impaired (VI) and 39 million totally blind). For such users, collisions during mobility tasks are a major concern and can reduce their quality of life. The white cane is the primary device used by the majority of blind or VI users to explore and possibly avoid obstacles; it can monitor only the ground (< 1 m) and it does not provide protection for the legs, the trunk and the head. In this paper, the authors propose a novel stand-alone Electronic Travel Aid (ETA) device for obstacle detection based on multi-sensing (by 4 ultrasonic transducers) and a microcontroller. Portability, simplicity, reduced dimensions and cost are among the major pros of the reported system, which can detect and localize (angular position and distance from the user) obstacles that may be present in the volume in front of the user and on the ground ahead.
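Localization with a small set of ultrasonic transducers reduces to converting each echo's time of flight into a range (d = c*t/2) and using the detecting transducer's boresight as the angular estimate. A minimal sketch under an assumed four-sensor layout, not the device's actual firmware:

```python
SPEED_OF_SOUND = 343.0                        # m/s in air at roughly 20 degrees C
SECTOR_ANGLES = [-45.0, -15.0, 15.0, 45.0]    # assumed boresights of the four transducers (deg)

def localise(echo_times_s):
    """Return (sector_angle_deg, range_m) of the nearest echo, or None if no echo."""
    ranges = [SPEED_OF_SOUND * t / 2.0 if t is not None else float("inf")
              for t in echo_times_s]
    i = min(range(len(ranges)), key=lambda k: ranges[k])
    if ranges[i] == float("inf"):
        return None
    return SECTOR_ANGLES[i], ranges[i]

# An echo after 8.7 ms on the third transducer: obstacle ~1.5 m away, ~15 degrees right.
print(localise([None, None, 0.0087, None]))
```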
Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.
2012-01-01
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535
Optic flow cues guide flight in birds.
Bhagavatula, Partha S; Claudianos, Charles; Ibbotson, Michael R; Srinivasan, Mandyam V
2011-11-08
Although considerable effort has been devoted to investigating how birds migrate over large distances, surprisingly little is known about how they tackle so successfully the moment-to-moment challenges of rapid flight through cluttered environments [1]. It has been suggested that birds detect and avoid obstacles [2] and control landing maneuvers [3-5] by using cues derived from the image motion that is generated in the eyes during flight. Here we investigate the ability of budgerigars to fly through narrow passages in a collision-free manner, by filming their trajectories during flight in a corridor where the walls are decorated with various visual patterns. The results demonstrate, unequivocally and for the first time, that birds negotiate narrow gaps safely by balancing the speeds of image motion that are experienced by the two eyes and that the speed of flight is regulated by monitoring the speed of image motion that is experienced by the two eyes. These findings have close parallels with those previously reported for flying insects [6-13], suggesting that some principles of visual guidance may be shared by all diurnal, flying animals. Copyright © 2011 Elsevier Ltd. All rights reserved.
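The flow-balancing strategy reported above is often modelled as a centring controller: translational image motion on each side scales with forward speed over the distance to that side, and steering is driven by the left-right difference. A minimal sketch of that model (illustrative gain, not fitted to the budgerigar data):

```python
def optic_flow(forward_speed, distance_to_wall):
    """Translational flow magnitude seen on one side (rad/s, roughly v / d)."""
    return forward_speed / distance_to_wall

def centring_command(forward_speed, d_left, d_right, gain=0.5):
    """Positive output steers right, away from the faster-flowing (nearer) left wall."""
    return gain * (optic_flow(forward_speed, d_left) - optic_flow(forward_speed, d_right))

# Flying at 4 m/s, 0.5 m from the left wall and 1.5 m from the right wall:
# the left flow is three times the right flow, so steer right until the flows match.
print(centring_command(4.0, 0.5, 1.5))
```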
Extending Our Vision: Access to Inclusive Dance Education for People with Visual Impairment
ERIC Educational Resources Information Center
Seham, Jenny; Yeo, Anna J.
2015-01-01
Environmental, organizational and attitudinal obstacles continue to prevent people with vision loss from meaningfully engaging in dance education and performance. This article addresses the societal disabilities that handicap access to dance education for the blind. Although much of traditional dance instruction relies upon visual cuing and…
ERIC Educational Resources Information Center
Jazzar, Michael; Hamm, Carl
2007-01-01
Following an illustrious introduction to the Montagnards and their plight and flight to the United States, this study explores the education, assimilation, and future development of Montagnard students into American schools. A guide for school leaders is presented within this study to assist the Montagnard students in overcoming obstacles and…
Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.
Kim, Yeongmi; Harders, Matthias; Gassert, Roger
2015-01-01
Delivering distance information about nearby obstacles from sensors embedded in a white cane, in addition to the intrinsic mechanical feedback from the cane, can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods with temporal variation only, spatio-temporal variation, and spatial/temporal/intensity variation were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in a high distance identification rate. Further, participants expressed their preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods, with spatio-temporal variation and spatial/temporal/intensity variation for the four-finger vibration, could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
NASA Astrophysics Data System (ADS)
Gerdts, Stephen; Chambers, Jessica; Ahmed, Kareem
2016-11-01
A detonation engine's fundamental design concept focuses on enhancing the Deflagration to Detonation Transition (DDT), the process through which subsonic flames accelerate to form a spontaneous detonation wave. Flame acceleration is driven by turbulent interactions that expand the reaction zone and induce mixing of products and reactants. Turbulence in a duct can be generated using solid obstructions, fluidic obstacles, duct angle changes, and wall skin friction. Solid obstacles have been previously explored and offer repeatable turbulence induction at the cost of pressure losses and additional system weight. Fluidic jet obstacles are a novel technique that provides advantages such as the ability to be throttled, allowing for active control of combustion modes. The scope of the present work is to expand the experimental database by varying parameters such as main flow and jet equivalence ratios, fluidic momentum ratios, and solid obstacle blockage ratios. Schlieren flow visualization and particle image velocimetry (PIV) are employed to investigate turbulent flame dynamics throughout the interaction. Optimum conditions that lead to flame acceleration for both solid and fluidic obstacles will be determined. American Chemical Society.
Hardy's stargazers and the astronomy of other minds.
Henchman, A
2008-01-01
This essay argues that Thomas Hardy compares the act of observing another person to the scientific practice of observing the stars in order to reveal structural obstacles to accessing other minds. He draws on astronomy and optics to underscore the discrepancy between the full perception one has of one's own consciousness and the lack of such sensory evidence for the consciousness of others. His scenes of stargazing show such obstacles being temporarily overcome; the stargazer turns away from the thick sensory detail of earthly life and uses minimal visual information as a jumping-off point for the imagination. These visual journeys into space are analogous to those Hardy's readers experience as he wrests them out of their bodies into imaginary landscapes and unfamiliar minds.
Beurskens, Rainer; Bock, Otmar
2013-12-01
Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway of 20m length in four different walking conditions: (a) wide path and preferred pace; (b) narrow path and preferred pace, (c) wide path and fast pace, (d) obstacled wide path and preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrains: the higher visual demand of those conditions accentuates the age-related deficits in coordinating them with a visual non-walking task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Badman, Jacqueline; Lewis-Spicer, Lisa
This activity guide showcases the work of 19 women artists who have made outstanding contributions to the world of art in spite of the incredible obstacles society placed upon them. Artists, alphabetically presented, include: Sofonisba Anguissola, Rosa Bonheur, Deborah Butterfield, Rosalba Carriera, Janet Fish, Helen Frankenthaler, Giovanna…
Salomir, Rares; Petrusca, Lorena; Auboiroux, Vincent; Muller, Arnaud; Vargas, Maria-Isabel; Morel, Denis R; Goget, Thomas; Breguet, Romain; Terraz, Sylvain; Hopple, Jerry; Montet, Xavier; Becker, Christoph D; Viallon, Magalie
2013-06-01
The treatment of liver cancer is a major public health issue because the liver is a frequent site for both primary and secondary tumors. Rib heating represents a major obstacle for the application of extracorporeal focused ultrasound to liver ablation. Magnetic resonance (MR)-guided external shielding of acoustic obstacles (eg, the ribs) was investigated here to avoid unwanted prefocal energy deposition in the pathway of the focused ultrasound beam. Ex vivo and in vivo (7 female sheep) experiments were performed in this study. Magnetic resonance-guided high-intensity focused ultrasound (MRgHIFU) was performed using a randomized 256-element phased-array transducer (f∼1 MHz) and a 3-T whole-body clinical MR scanner. A physical mask was inserted in the prefocal beam pathway, external to the body, to block the energy normally targeted on the ribs. The effectiveness of the reflecting material was investigated by characterizing the efficacy of high-intensity focused ultrasound beam reflection and scattering on its surface using Schlieren interferometry. Before high-intensity focused ultrasound sonication, the alignment of the protectors with the conical projections of the ribs was required and achieved in multiple steps using the embedded graphical tools of the MR scanner. Multiplanar near real-time MR thermometry (proton resonance frequency shift method) enabled the simultaneous visualization of the local temperature increase at the focal point and around the exposed ribs. The beam defocusing due to the shielding was evaluated from the MR acoustic radiation force impulse imaging data. Both MR thermometry (performed with hard absorber positioned behind a full-aperture blocking shield) and Schlieren interferometry indicated a very good energy barrier of the shielding material. The specific temperature contrast between rib surface (spatial average) and focus, calculated at the end point of the MRgHIFU sonication, with protectors vs no protectors, indicated an important reduction of the temperature elevation at the ribs' surface, typically by 3.3 ± 0.4 in vivo. This was translated into an exponential reduction in thermal dose by several orders of magnitude. The external shielding covering the full conical shadow of the ribs was more effective when the protectors could be placed close to the ribs' surface and had a tendency to lose its efficiency when placed further from the ribs. Hepatic parenchyma was safely ablated in vivo using this rib-sparing strategy and single-focus independent sonications. A readily available, MR-compatible, effective, and cost-competitive method for rib protection in transcostal MRgHIFU was validated in this study, using specific reflective strips. The current approach permitted safe intercostal ablation of small volumes (0.7 mL) of liver parenchyma.
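Proton resonance frequency (PRF) shift thermometry, the MR thermometry method named above, converts the phase difference between successive gradient-echo images into a temperature change using the water PRF thermal coefficient. The sketch below uses generic textbook constants and an assumed echo time, not the study's sequence parameters.

```python
import math

GAMMA_HZ_PER_T = 42.58e6      # proton gyromagnetic ratio / 2*pi
ALPHA_PPM_PER_C = -0.01       # approximate PRF thermal coefficient of water

def delta_T(delta_phase_rad, b0_tesla=3.0, te_s=0.010):
    """Temperature change (deg C) from the phase difference between two images."""
    return delta_phase_rad / (2 * math.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6
                              * b0_tesla * te_s)

# A -0.64 rad phase change at 3 T with TE = 10 ms corresponds to roughly +8 deg C.
print(round(delta_T(-0.64), 1))
```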
Iwashita, Takuji; Nakai, Yousuke; Lee, John G; Park, Do Hyun; Muthusamy, V Raman; Chang, Kenneth J
2012-02-01
Multiple diagnostic and therapeutic endoscopic ultrasound (EUS) procedures have been widely performed using a standard oblique-viewing (OV) curvilinear array (CLA) echoendoscope. Recently, a new, forward-viewing (FV) CLA was developed, with the advantages of improved endoscopic viewing and manipulation of devices. However, the FV-CLA echoendoscope has a narrower ultrasound scanning field, and lacks an elevator, which might represent obstacles for clinical use. The aim of this study was to compare the FV-CLA echoendoscope to the OV-CLA echoendoscope for EUS imaging of abdominal organs, and to assess the feasibility of EUS-guided interventions using the FV-CLA echoendoscope. EUS examinations were first performed and recorded using the OV-CLA echoendoscope, followed immediately by the FV-CLA echoendoscope. Video recordings were then assessed by two independent endosonographers in a blinded fashion. The EUS visualization and image quality of specific abdominal organs/structures were scored. Any indicated fine-needle aspiration (FNA) or intervention was performed using the FV-CLA echoendoscope, with the OV-CLA echoendoscope as salvage upon failure. A total of 21 patients were examined in the study. Both echoendoscopes had similar visualization and image quality for all organs/structures, except the common hepatic duct (CHD), which was seen significantly better with the FV-CLA echoendoscope. EUS interventions were conducted in eight patients, including FNA of pancreatic mass (3), pancreatic cyst (3), and cystgastrostomy (2). The FV-CLA echoendoscope was successful in seven patients. One failed FNA of the pancreatic head cyst was salvaged using the OV-CLA echoendoscope. There were no differences between the FV-CLA echoendoscope and the OV-CLA echoendoscope in visualization or image quality on upper EUS, except for the superior image quality of CHD using the FV-CLA echoendoscope. Therefore, the disadvantages of the FV-CLA echoendoscope appear minimal in light of the potential advantages. © 2011 Journal of Gastroenterology and Hepatology Foundation and Blackwell Publishing Asia Pty Ltd.
Visual guidance of mobile platforms
NASA Astrophysics Data System (ADS)
Blissett, Rodney J.
1993-12-01
Two systems are described and results presented demonstrating aspects of real-time visual guidance of autonomous mobile platforms. The first approach incorporates prior knowledge in the form of rigid geometrical models linking visual references within the environment. The second approach is based on a continuous synthesis of information extracted from image tokens to generate a coarse-grained world model, from which potential obstacles are inferred. The use of these techniques in workplace applications is discussed.
Transient bow shock around a cylinder in a supersonic dusty plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, John K.; Merlino, Robert L.
2013-07-15
Visual observations of the formation of a bow shock in the transient supersonic flow of a dusty plasma incident on a biased cylinder are presented. The bow shock formed when the advancing front of a streaming dust cloud was reflected by the obstacle. After its formation, the density jump of the bow shock increased as it moved upstream of the obstacle. A physical picture for the formation of the electrohydrodynamic bow shock is discussed.
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. The inspection robot is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and declining. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed in the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect the 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute the navigation and inspection tasks.
NASA Astrophysics Data System (ADS)
Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.
1993-05-01
This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
Obstacle avoidance system with sonar sensing and fuzzy logic
NASA Astrophysics Data System (ADS)
Chiang, Wen-chuan; Kelkar, Nikhal; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of an obstacle avoidance system using sonar sensors for a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The obstacle avoidance system is based on a micro-controller interfaced with multiple ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a distance measurement back to the computer via the serial line. This design yields a portable independent system. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous obstacle avoidance controller applicable for any mobile vehicle with only minor adaptations.
Smart walking stick for blind people: an application of 3D printer
NASA Astrophysics Data System (ADS)
Ikbal, Md. Allama; Rahman, Faidur; Ali, Md. Ripon; Kabir, M. Hasnat; Furukawa, Hidemitsu
2017-04-01
A prototype of the smart walking stick has been designed and characterized for people who are visually impaired. In this study, the proposed system is intended to alert visually impaired people to obstacles in front of them as well as to obstacles on the street surface, such as a manhole, while they are walking. The proposed system was designed in two stages, i.e. hardware and software, which together make the system a complete prototype. Three ultrasonic sonar sensors were used to detect obstacles in front of the user and street surface obstacles such as manholes. The sensor transmits an ultrasonic wave which travels toward the obstacle and back to the sensor receiver. The distance between the sensor and the obstacle is calculated from the received signal. The calculated distance value is compared with a pre-defined value to determine whether an obstacle is present or not. 3D CAD software was used to design the sensor holders. An Up-Mini 3D printer was used to print the sensor holders, which were mounted on the walking stick so that the sensors were fixed in the right positions. Another sensor was used for detecting water on the walking street. The performance in detecting obstacles and water indicates the merit of the smart walking stick.
Shade determination using camouflaged visual shade guides and an electronic spectrophotometer.
Kvalheim, S F; Øilo, M
2014-03-01
The aim of the present study was to compare a camouflaged visual shade guide to a spectrophotometer designed for restorative dentistry. Two operators performed analyses of 66 subjects. One central upper incisor was measured four times by each operator: twice with a camouflaged visual shade guide and twice with a spectrophotometer. Both methods had acceptable repeatability rates, but the electronic shade determination showed higher repeatability. In general, the electronically determined shades were darker than the visually determined shades. The use of a camouflaged visual shade guide seems to be an adequate method to reduce operator bias.
Parameswaran, Vidhya; Anilkumar, S; Lylajam, S; Rajesh, C; Narayan, Vivek
2016-01-01
This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations between color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine accuracies and interrater agreement of both methods and effectiveness of two shade guides with either method. In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine repeatability of visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results revealed that visual method had greater accuracy than the spectrophotometer. The spectrophotometer; however, exhibited significantly better interrater agreement as compared to the visual method. While VITAPAN Classical shade guide was more accurate with the spectrophotometer, VITAPAN 3D Master shade guide proved better with visual method. This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. Judicious combination of both techniques is imperative to attain a successful and esthetic outcome.
Extended Wearing Trial of Trifield Lens Device for “Tunnel Vision”
Woods, Russell L.; Giorgi, Robert G.; Berson, Eliot L.; Peli, Eli
2009-01-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5 to 22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6 to 60, weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, 9 chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those 9 patients, at long-term follow-up (35 to 78 weeks), 3 reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9 to 38, degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed. PMID:20444130
Extended wearing trial of Trifield lens device for 'tunnel vision'.
Woods, Russell L; Giorgi, Robert G; Berson, Eliot L; Peli, Eli
2010-05-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5-22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6-60 weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, nine chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those nine patients, at long-term follow-up (35-78 weeks), three reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9-38 degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For reported difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed.
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi
2013-11-01
Chameleons (Chamaeleonidae, reptilia), in common with most ectotherms, show full optic nerve decussation and sparse inter-hemispheric commissures. Chameleons are unique in their capacity for highly independent, large-amplitude eye movements. We address the question: Do common chameleons, Chamaeleo chameleon, during detour, show patterns of lateralization of motion and of eye use that differ from those shown by other ectotherms? To reach a target (prey) in passing an obstacle in a Y-maze, chameleons were required to make a left or a right detour. We analyzed the direction of detours and eye use and found that: (i) individuals differed in their preferred detour direction, (ii) eye use was lateralized at the group level, with significantly longer durations of viewing the target with the right eye, compared with the left eye, (iii) during left side, but not during right side, detours the durations of viewing the target with the right eye were significantly longer than the durations with the left eye. Thus, despite the uniqueness of chameleons' visual system, they display patterns of lateralization of motion and of eye use, typical of other ectotherms. These findings are discussed in relation to hemispheric functions. Copyright © 2013 Elsevier B.V. All rights reserved.
Worden, Timothy A; Mendes, Matthew; Singh, Pratham; Vallis, Lori Ann
2016-10-01
Successful planning and execution of motor strategies while concurrently performing a cognitive task has been previously examined, but unfortunately the varied and numerous cognitive tasks studied have limited our fundamental understanding of how the central nervous system successfully integrates and executes these tasks simultaneously. To gain a better understanding of these mechanisms we used a set of cognitive tasks requiring similar central executive function processes and response outputs but requiring different perceptual mechanisms to perform the motor task. Thirteen healthy young adults (20.6±1.6 years old) were instrumented with kinematic markers (60 Hz) and completed 5 practice trials, 10 single-task obstacle walking trials, and two 40-trial experimental blocks. Each block contained 20 seated (single-task) trials followed by 20 cognitive and obstacle (30% of lower leg length) crossing trials (dual-task). Blocks were randomly presented and included either an auditory Stroop task (AST; central interference only) or a visual Stroop task (VST; combined central and structural interference). Higher accuracy rates and shorter response times were observed for the VST versus AST single-task trials (p<0.05). Conversely, for the obstacle stepping performance, larger dual-task costs were observed for the VST as compared to the AST for clearance measures (the VST induced larger clearance values for both the leading and trailing feet), indicating VST tasks caused greater interference for obstacle crossing (p<0.05). These results supported the hypothesis that structural interference has a larger effect on motor performance in a dual-task situation compared to cognitive tasks that pose interference at only the central processing stage. Copyright © 2016 Elsevier B.V. All rights reserved.
Plastic Bags and Environmental Pollution
ERIC Educational Resources Information Center
Sang, Anita Ng Heung
2010-01-01
The "Hong Kong Visual Arts Curriculum Guide," covering Primary 1 to Secondary 3 grades (Curriculum Development Committee, 2003), points to three domains of learning in visual arts: (1) visual arts knowledge; (2) visual arts appreciation and criticism; and (3) visual arts making. The "Guide" suggests learning should develop…
Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.
ERIC Educational Resources Information Center
Chun, Marvin M.; Jiang, Yuhong
1998-01-01
Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)
NASA Astrophysics Data System (ADS)
Gleghorn, Jason P.; Smith, James P.; Kirby, Brian J.
2013-09-01
Microfluidic obstacle arrays have been used in numerous applications, and their ability to sort particles or capture rare cells from complex samples has broad and impactful applications in biology and medicine. We have investigated the transport and collision dynamics of particles in periodic obstacle arrays to guide the design of convective, rather than diffusive, transport-based immunocapture microdevices. Ballistic and full computational fluid dynamics simulations are used to understand the collision modes that evolve in cylindrical obstacle arrays with various geometries. We identify previously unrecognized collision mode structures and differential size-based collision frequencies that emerge from these arrays. Previous descriptions of transverse displacements that assume unidirectional flow in these obstacle arrays cannot capture mode transitions properly as these descriptions fail to capture the dependence of the mode transitions on column spacing and the attendant change in the flow field. Using these analytical and computational simulations, we elucidate design parameters that induce high collision rates for all particles larger than a threshold size or selectively increase collision frequencies for a narrow range of particle sizes within a polydisperse population. Furthermore, we investigate how the particle Péclet number affects collision dynamics and mode transitions and demonstrate that experimental observations from various obstacle array geometries are well described by our computational model.
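The particle Péclet number discussed above is the ratio of convective to diffusive transport, Pe = U*d/D, with D typically estimated from the Stokes-Einstein relation. A worked sketch with illustrative values for a cell-sized particle:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K

def stokes_einstein_D(diameter_m, temp_K=298.0, viscosity_Pa_s=1e-3):
    """Diffusivity of a sphere in a liquid (m^2/s)."""
    return K_B * temp_K / (3 * math.pi * viscosity_Pa_s * diameter_m)

def peclet(velocity_m_s, diameter_m, temp_K=298.0, viscosity_Pa_s=1e-3):
    """Pe = U*d/D; Pe >> 1 means transport through the array is convection-dominated."""
    return velocity_m_s * diameter_m / stokes_einstein_D(diameter_m, temp_K, viscosity_Pa_s)

# A 10-micron particle moving at 100 microns/s in water gives Pe on the order of 1e4,
# so its collisions with obstacles are governed by streamlines rather than diffusion.
print(f"{peclet(100e-6, 10e-6):.2e}")
```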
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Timmis, Matthew A; Bijl, Herre; Turner, Kieran; Basevitch, Itay; Taylor, Matthew J D; van Paridon, Kjell N
2017-01-01
Pedestrians regularly engage with their mobile phone whilst walking. The current study investigated how mobile phone use affects where people look (visual search behaviour) and how they negotiate a floor-based hazard placed along the walking path. Whilst wearing a mobile eye tracker and motion analysis sensors, participants walked up to and negotiated a surface height change whilst writing a text, reading a text, talking on the phone, or without a phone. Differences in gait and visual search behaviour were found when using a mobile phone compared to when not using a phone. Using a phone resulted in looking less frequently and for less time at the surface height change, which led to adaptations in gait by negotiating it in a manner consistent with adopting an increasingly cautious stepping strategy. When using a mobile phone, writing a text whilst walking resulted in the greatest adaptations in gait and visual search behaviour compared to reading a text and talking on a mobile phone. Findings indicate that mobile phone users were able to adapt their visual search behaviour and gait to incorporate mobile phone use in a safe manner when negotiating floor-based obstacles.
Bijl, Herre; Turner, Kieran; Basevitch, Itay; Taylor, Matthew J. D.; van Paridon, Kjell N.
2017-01-01
Pedestrians regularly engage with their mobile phone whilst walking. The current study investigated how mobile phone use affects where people look (visual search behaviour) and how they negotiate a floor-based hazard placed along the walking path. Whilst wearing a mobile eye tracker and motion analysis sensors, participants walked up to and negotiated a surface height change whilst writing a text, reading a text, talking on the phone, or without a phone. Differences in gait and visual search behaviour were found when using a mobile phone compared to when not using a phone. Using a phone resulted in looking less frequently and for less time at the surface height change, which led to adaptations in gait by negotiating it in a manner consistent with adopting an increasingly cautious stepping strategy. When using a mobile phone, writing a text whilst walking resulted in the greatest adaptations in gait and visual search behaviour compared to reading a text and talking on a mobile phone. Findings indicate that mobile phone users were able to adapt their visual search behaviour and gait to incorporate mobile phone use in a safe manner when negotiating floor-based obstacles. PMID:28665942
Roentgen, Uta R; Gelderblom, Gert Jan; de Witte, Luc P
2012-01-01
To develop a suitable mobility course for the assessment of mobility performance as part of a user evaluation of Electronic Mobility Aids (EMA) aimed at obstacle detection and orientation. A review of the literature led to a list of critical factors for the assessment of mobility performance of persons who are visually impaired. Based upon that list, the method, test situations, and determining elements were selected and presented to Dutch orientation and mobility experts. Based on expert advice and a pilot study, minor changes were made and the final version was used for the evaluation of two EMA by eight persons who are visually impaired. The results of the literature study are summarized in an overview of critical factors for the assessment of the mobility performance of persons who are visually impaired. Applied to the requirements of the above-mentioned user evaluation, a replicable indoor mobility course has been described in detail and tested. Based upon evidence from the literature, an indoor mobility course has been developed that was sensitive enough to assess differences in mobility incidents and obstacle detection when using an EMA compared to the regular mobility aid. Experts' opinion confirmed its face and content validity.
Parameswaran, Vidhya; Anilkumar, S.; Lylajam, S.; Rajesh, C.; Narayan, Vivek
2016-01-01
Background and Objectives: This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations comparing color perceived by human observers with color assessed by instruments have been inconclusive. The objectives were to determine accuracies and interrater agreement of both methods and effectiveness of two shade guides with either method. Methods: In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine repeatability of visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results: Results revealed that the visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement as compared to the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. Conclusion: This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. Judicious combination of both techniques is imperative to attain a successful and esthetic outcome. PMID:27746599
A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.
Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen
2012-11-01
This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
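The reactive obstacle avoidance direction used in DIRECT-ROAD is not spelled out in this abstract; as a loose illustration of a reactive avoidance direction in general, the sketch below blends the straight-to-target direction with repulsion from nearby obstacles in a potential-field style. The influence radius, weighting and example coordinates are invented and should not be read as the authors' model.

    import numpy as np

    def reactive_direction(hand_xyz, target_xyz, obstacles_xyz, influence_m=0.15):
        # Unit vector toward the target, deflected away from obstacles that lie
        # within influence_m of the hand (illustrative values only).
        hand = np.asarray(hand_xyz, dtype=float)
        attract = np.asarray(target_xyz, dtype=float) - hand
        attract /= np.linalg.norm(attract)
        repel = np.zeros(3)
        for obs in obstacles_xyz:
            diff = hand - np.asarray(obs, dtype=float)
            d = np.linalg.norm(diff)
            if 1e-9 < d < influence_m:
                repel += (diff / d) * (influence_m - d) / influence_m
        direction = attract + repel
        return direction / np.linalg.norm(direction)

    # Example: an obstacle offset toward +y deflects the commanded direction toward -y.
    print(reactive_direction([0, 0, 0], [0.4, 0, 0], [[0.05, 0.05, 0]]))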
Advanced Augmented White Cane with obstacle height and distance feedback.
Pyun, Rosali; Kim, Yeongmi; Wespe, Pascal; Gassert, Roger; Schneller, Stefan
2013-06-01
The white cane is a widely used mobility aid that helps visually impaired people navigate the surroundings. While it reliably and intuitively extends the detection range of ground-level obstacles and drop-offs to about 1.2 m, it lacks the ability to detect trunk and head-level obstacles. Electronic Travel Aids (ETAs) have been proposed to overcome these limitations, but have found minimal adoption due to limitations such as low information content and low reliability. Although existing ETAs extend the sensing range beyond that of the conventional white cane, most of them do not detect head-level obstacles and drop-offs, nor can they identify the vertical extent of obstacles. Furthermore, some ETAs work independently of the white cane, and thus reliable detection of surface textures and drop-offs is not provided. This paper introduces a novel ETA, the Advanced Augmented White Cane, which detects obstacles at four vertical levels and provides multi-sensory feedback. We evaluated the device in five blindfolded subjects through reaction time measurements following the detection of an obstacle, as well as through the reliability of drop-off detection. The results showed that our aid could help the user successfully detect an obstacle and identify its height, with an average reaction time of 410 msec. Drop-offs were reliably detected with an intraclass correlation > 0.95. This work is a first step towards a low-cost ETA to complement the functionality of the conventional white cane.
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.
2004-01-01
We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, in memory-guided saccades the strength of presaccadic activity shows a correlation with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334
Cognitive Control Network Contributions to Memory-Guided Visual Attention
Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.
2016-01-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253
[Cortical potentials evoked to response to a signal to make a memory-guided saccade].
Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V
2010-01-01
Differences in the parameters of visually guided and memory-guided saccades were shown. The increase in memory-guided saccade latency compared to that of visually guided saccades may indicate a slowing of saccadic programming due to information retrieval from memory. Comparison of the parameters and topography of the evoked components N1 and P1 elicited by the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is performed predominantly via a top-down attention mechanism before a memory-guided saccade and a bottom-up mechanism before a visually guided saccade. The findings show that the increase in the latency of memory-guided saccades is connected with decision making at the central stage of saccade programming. We propose that wave N2, which develops in the middle of the latent period of memory-guided saccades, is correlated with this process. The topography and spatial dynamics of components N1, P1 and N2 indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and left-hemispheric brain mechanisms of motor attention.
1988-04-01
There is thus a biological motivation for investigating the solution to a specific problem, e.g., the visual obstacle avoidance problem, as a particular, practically motivated aspect of the general problem. The apparent motion in the image, known as the optical flow, does not necessarily correspond to the 2-D motion of an obstacle in relative motion.
Unsteady flow characteristics in the near-wake of a two-dimensional obstacle
NASA Technical Reports Server (NTRS)
Dyment, A.; Gryson, P.
1984-01-01
The influence of the characteristics of boundary layer separation on the formation of vortices and alternating vortex paths in the wake of a two-dimensional obstacle at high Reynolds numbers was studied with an ultrafast visualization system. It is shown that alternating vortex paths occur for both laminar and turbulent flows, with similar flow characteristics. It is found that the emission of vortices does not change substantially when the flow passes from laminar to turbulent. A film with a time-scale change of 10,000 times illustrates some of the discussed phenomena.
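For orientation, the Reynolds number referred to above and the shedding frequency of an alternating vortex wake follow from standard definitions rather than from anything specific to this study. The sketch below uses the kinematic viscosity of air and a typical bluff-body Strouhal number of roughly 0.2 as assumed placeholder values.

    def reynolds_number(flow_speed_m_s, obstacle_size_m, kinematic_viscosity_m2_s=1.5e-5):
        # Re = U * L / nu; the default nu is for air at room temperature (an assumption).
        return flow_speed_m_s * obstacle_size_m / kinematic_viscosity_m2_s

    def shedding_frequency_hz(flow_speed_m_s, obstacle_size_m, strouhal=0.2):
        # f = St * U / L; St of about 0.2 is a typical bluff-body value, not a study figure.
        return strouhal * flow_speed_m_s / obstacle_size_m

    print(reynolds_number(20.0, 0.05), shedding_frequency_hz(20.0, 0.05))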
Visual Arts: A Guide to Curriculum Development in the Arts.
ERIC Educational Resources Information Center
Iowa State Dept. of Public Instruction, Des Moines.
This visual arts curriculum guide was developed as a subset of a model curriculum for the arts as mandated by the Iowa legislature. It is designed to be used in conjunction with the Visual Arts in Iowa Schools (VAIS). The guide is divided into six sections: Sections one and two contain the preface, acknowledgements, and a list of members of the…
The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search
ERIC Educational Resources Information Center
Becker, Stefanie I.
2010-01-01
Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…
Souza Silva, Wagner; Aravind, Gayatri; Sangani, Samir; Lamontagne, Anouk
2018-03-01
This study examines how three types of obstacles (cylinder, virtual human and virtual human with footstep sounds) affect circumvention strategies of healthy young adults. Sixteen participants aged 25.2 ± 2.5 years (mean ± 1SD) were tested while walking overground and viewing a virtual room through a helmet mounted display. As participants walked towards a stationary target in the far space, they avoided an obstacle (cylinder or virtual human) approaching either from the right (+40°), left (-40°) or head-on (0°). Obstacle avoidance strategies were characterized using the position and orientation of the head. Repeated mixed model analysis showed smaller minimal distances (p = 0.007) while avoiding virtual humans as compared to cylinders. Footstep sounds added to virtual humans did not modify (p = 0.2) minimal distances compared to when no sound was provided. Onset times of avoidance strategies were similar across conditions (p = 0.06). Results indicate that the nature of the obstacle (human-like vs. non-human object) matters and can modify avoidance strategies. Smaller obstacle clearances in response to virtual humans may reflect the use of a less conservative avoidance strategy, due to a resemblance of obstacles to pedestrians and a recall of strategies used in daily locomotion. The lack of influence of footstep sounds supports the fact that obstacle avoidance primarily relies on visual cues and the principle of 'inverse effectiveness' whereby multisensory neurons' response to multimodal stimuli becomes weaker when the unimodal sensory stimulus (vision) is strong. Present findings should be taken into consideration to optimize the ecological validity of VR-based obstacle avoidance paradigms used in research and rehabilitation. Copyright © 2018 Elsevier B.V. All rights reserved.
Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S
2014-06-01
Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.
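The controller in this study was synthesized with structured singular value analysis, which is not reproduced here; the following is only a minimal sketch of the underlying wide-field optic-flow idea, in which a yaw-rate command is obtained by balancing the average flow magnitude on the left and right halves of the field of view. The gain k_yaw, the sign convention and the example flow values are illustrative assumptions.

    import numpy as np

    def yaw_rate_command(flow_x, k_yaw=0.8):
        # flow_x: horizontal optic-flow components sampled across the field of
        # view, left half first, then right half. Positive output = yaw away
        # from the side with stronger flow (the nearer wall or obstacle).
        half = len(flow_x) // 2
        left = np.mean(np.abs(flow_x[:half]))
        right = np.mean(np.abs(flow_x[half:]))
        return k_yaw * (left - right)

    # Stronger flow on the left half suggests a nearer surface on the left.
    print(yaw_rate_command(np.array([1.2, 1.1, 1.0, 0.5, 0.4, 0.45])))

Because nearer surfaces generate larger image motion at a given forward speed, balancing the two halves tends to centre the vehicle in a corridor, which is the behaviour evaluated above.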
Code of Federal Regulations, 2014 CFR
2014-10-01
... SSM, established in accordance with this part which is provided by the appropriate traffic control... placed between opposing highway lanes designed to alert or guide traffic around an obstacle or to direct... acceptable channelization devices for purposes of this part. Additional design specifications are determined...
Kostyra, Eliza; Żakowska-Biemans, Sylwia; Śniegocka, Katarzyna; Piotrowska, Anna
2017-06-01
The number of visually impaired and blind people is rising worldwide due to ageing of the global population, but research regarding the impact of visual impairment on the ability of a person to choose food and to prepare meals is scarce. The aim of this study was threefold: to investigate factors determining the choices of food products in people with various levels of impaired vision; to identify obstacles they face while purchasing food, preparing meals and eating out; and to determine what would help them in the areas of food shopping and meal preparation. The data was collected from 250 blind and visually impaired subjects, recruited with the support of the National Association of the Blind. The study revealed that the majority of the visually impaired make food purchases at a supermarket or local grocery and they tend to favour shopping for food via the Internet. Direct sale channels like farmers markets were rarely used by the visually impaired. The most frequently mentioned factors that facilitated their food shopping decisions were the assistance of salespersons, product labelling in Braille, scanners that enable the reading of labels and a permanent place for products on the shop shelves. Meal preparation, particularly peeling, slicing and frying, posed many challenges to the visually impaired. More than half of the respondents ate meals outside the home, mainly with family or friends. The helpfulness of the staff and a menu in Braille were crucial for them to have a positive dining out experience. The results of the study provide valuable insights into the food choices and eating experiences of visually impaired people, and also suggest some practical implications to improve their independence and quality of life. Copyright © 2017 Elsevier Ltd. All rights reserved.
Melis-Dankers, Bart J. M.; Brouwer, Wiebo H.; Tucha, Oliver; Heutink, Joost
2016-01-01
Introduction People with homonymous visual field defects (HVFD) often report difficulty detecting obstacles in the periphery on their blind side in time when moving around. Recently, a randomized controlled trial showed that the InSight-Hemianopia Compensatory Scanning Training (IH-CST) specifically improved detection of peripheral stimuli and avoidance of obstacles when moving around, especially in dual task situations. Method The within-group training effects of the previously reported IH-CST are examined in an extended patient group. Performance of patients with HVFD on a pre-assessment, post-assessment and follow-up assessment and performance of a healthy control group are compared. Furthermore, it is examined whether training effects can be predicted by demographic characteristics, variables related to the visual disorder, and neuropsychological test results. Results Performance on both subjective and objective measures of mobility-related scanning was improved after training, while no evidence was found for improvement in visual functions (including visual fields), reading, visual search and dot counting. Self-reported improvement did not correlate with improvement in objective mobility performance. According to the participants, the positive effects were still present six to ten months after training. No demographic characteristics, variables related to the visual disorder, or neuropsychological test results were found to predict the size of the training effect, although some inconclusive evidence was found for more improvement in patients with left-sided HVFD than in patients with right-sided HVFD. Conclusion Further support was found for a positive effect of IH-CST on detection of visual stimuli during mobility-related activities specifically. Based on the reports given by patients, these effects appear to be long-term effects. However, no conclusions can be drawn on the objective long-term training effects. PMID:27935973
Loh, Alvona Zi Hui; Tan, Julia Shi Yu; Lee, Jeannette Jen-Mai; Koh, Gerald Choon-Huat
2015-01-01
Purpose In medical school, students may participate in various community involvement projects (CIP), which serve disadvantaged communities. However, several obstacles may arise during these projects. The authors conducted a qualitative study with the primary aim of understanding the obstacles and corresponding potential solutions when medical students in Singapore participate in local CIP (LCIP) and overseas CIP (OCIP). Design The authors recruited medical students from Yong Loo Lin School of Medicine, National University of Singapore, who were also leaders of a specific community service project done in medical school. Twelve one-to-one interviews were held for the participants from 6 to 8 January 2013. Participants were led in a discussion based on an interview guide. The interviews were audio-recorded and transcribed into free-flow text. Subsequently, content and thematic analyses of the transcripts were performed independently by three researchers. Results The medical students faced many common obstacles during their community service projects. These obstacles include difficulties in recruiting and managing volunteers, attaining recognition or credibility for the project to acquire funding and resources, adjusting to a different culture or language, setting goals, and facing project-specific obstacles. Potential solutions were offered for some obstacles, such as building a strong executive committee for the project, grooming successive batches of leaders, and improving the project's public image, mentorship, reflections, and sustainability plans. Conclusions Mentorship, reflections, and sustainability are potential solutions that have been proposed to tackle the obstacles faced during community service participation in medical school. However, there may still be difficulty in solving some of the problems even after these measures are put into practice. Future research may focus on evaluating the effectiveness of these suggested solutions. PMID:26490690
Loh, Alvona Zi Hui; Tan, Julia Shi Yu; Lee, Jeannette Jen-Mai; Koh, Gerald Choon-Huat
2015-01-01
In medical school, students may participate in various community involvement projects (CIP), which serve disadvantaged communities. However, several obstacles may arise during these projects. The authors conducted a qualitative study with the primary aim of understanding the obstacles and corresponding potential solutions when medical students in Singapore participate in local CIP (LCIP) and overseas CIP (OCIP). The authors recruited medical students from Yong Loo Lin School of Medicine, National University of Singapore, who were also leaders of a specific community service project done in medical school. Twelve one-to-one interviews were held for the participants from 6 to 8 January 2013. Participants were led in a discussion based on an interview guide. The interviews were audio-recorded and transcribed into free-flow text. Subsequently, content and thematic analyses of the transcripts were performed independently by three researchers. The medical students faced many common obstacles during their community service projects. These obstacles include difficulties in recruiting and managing volunteers, attaining recognition or credibility for the project to acquire funding and resources, adjusting to a different culture or language, setting goals, and facing project-specific obstacles. Potential solutions were offered for some obstacles, such as building a strong executive committee for the project, grooming successive batches of leaders, and improving the project's public image, mentorship, reflections, and sustainability plans. Mentorship, reflections, and sustainability are potential solutions that have been proposed to tackle the obstacles faced during community service participation in medical school. However, there may still be difficulty in solving some of the problems even after these measures are put into practice. Future research may focus on evaluating the effectiveness of these suggested solutions.
3D-Sonification for Obstacle Avoidance in Brownout Conditions
NASA Technical Reports Server (NTRS)
Godfroy-Cooper, M.; Miller, J. D.; Szoboszlay, Z.; Wenzel, E. M.
2017-01-01
Helicopter brownout is a phenomenon that occurs when making landing approaches in dusty environments, whereby sand or dust particles become swept up in the rotor outwash. Brownout is characterized by partial or total obscuration of the terrain, which degrades visual cues necessary for hovering and safe landing. Furthermore, the motion of the dust cloud produced during brownout can lead to the pilot experiencing motion cue anomalies such as vection illusions. In this context, the stability and guidance control functions can be intermittently or continuously degraded, potentially leading to undetected surface hazards and obstacles as well as unnoticed drift. Safe and controlled landing in brownout can be achieved using an integrated presentation of LADAR and RADAR imagery and aircraft state symbology. However, though detected by the LADAR and displayed on the sensor image, small obstacles can be difficult to discern from the background so that changes in obstacle elevation may go unnoticed. Moreover, pilot workload associated with tracking the displayed symbology is often so high that the pilot cannot give sufficient attention to the LADAR/RADAR image. This paper documents a simulation evaluating the use of 3D auditory cueing for obstacle avoidance in brownout as a replacement for or complement to LADAR/RADAR imagery.
Cell-fusion method to visualize interphase nuclear pore formation.
Maeshima, Kazuhiro; Funakoshi, Tomoko; Imamoto, Naoko
2014-01-01
In eukaryotic cells, the nucleus is a complex and sophisticated organelle that organizes genomic DNA to support essential cellular functions. The nuclear surface contains many nuclear pore complexes (NPCs), channels for macromolecular transport between the cytoplasm and nucleus. It is well known that the number of NPCs almost doubles during interphase in cycling cells. However, the mechanism of NPC formation is poorly understood, presumably because a practical system for analysis does not exist. The most difficult obstacle in the visualization of interphase NPC formation is that NPCs already exist after nuclear envelope formation, and these existing NPCs interfere with the observation of nascent NPCs. To overcome this obstacle, we developed a novel system using the cell-fusion technique (heterokaryon method), previously also used to analyze the shuttling of macromolecules between the cytoplasm and the nucleus, to visualize the newly synthesized interphase NPCs. In addition, we used a photobleaching approach that validated the cell-fusion method. We recently used these methods to demonstrate the role of cyclin-dependent protein kinases and of Pom121 in interphase NPC formation in cycling human cells. Here, we describe the details of the cell-fusion approach and compare the system with other NPC formation visualization methods. Copyright © 2014 Elsevier Inc. All rights reserved.
Visual Outcomes After LASIK Using Topography-Guided vs Wavefront-Guided Customized Ablation Systems.
Toda, Ikuko; Ide, Takeshi; Fukumoto, Teruki; Tsubota, Kazuo
2016-11-01
To evaluate the visual performance of two customized ablation systems (wavefront-guided ablation and topography-guided ablation) in LASIK. In this prospective, randomized clinical study, 68 eyes of 35 patients undergoing LASIK were enrolled. Patients were randomly assigned to wavefront-guided ablation using the iDesign aberrometer and STAR S4 IR Excimer Laser system (Abbott Medical Optics, Inc., Santa Ana, CA) (wavefront-guided group; 32 eyes of 16 patients; age: 29.0 ± 7.3 years) or topography-guided ablation using the OPD-Scan aberrometer and EC-5000 CXII excimer laser system (NIDEK, Tokyo, Japan) (topography-guided group; 36 eyes of 19 patients; age: 36.1 ± 9.6 years). Preoperative manifest refraction was -4.92 ± 1.95 diopters (D) in the wavefront-guided group and -4.44 ± 1.98 D in the topography-guided group. Visual function and subjective symptoms were compared between groups before and 1 and 3 months after LASIK. Of seven subjective symptoms evaluated, four were significantly milder in the wavefront-guided group at 3 months. Contrast sensitivity with glare off at low spatial frequencies (6.3° and 4°) was significantly higher in the wavefront-guided group. Uncorrected and corrected distance visual acuity, manifest refraction, and higher order aberrations measured by OPD-Scan and iDesign were not significantly different between the two groups at 1 and 3 months after LASIK. Both customized ablation systems used in LASIK achieved excellent results in predictability and visual function. The wavefront-guided ablation system may have some advantages in the quality of vision. It may be important to select the appropriate system depending on eye conditions such as the pattern of total and corneal higher order aberrations. [J Refract Surg. 2016;32(11):727-732.]. Copyright 2016, SLACK Incorporated.
A new primary mobility tool for the visually impaired: A white cane-adaptive mobility device hybrid.
Rizzo, John-Ross; Conti, Kyle; Thomas, Teena; Hudson, Todd E; Wall Emerson, Robert; Kim, Dae Shik
2017-05-16
This article describes pilot testing of an adaptive mobility device-hybrid (AMD-H) combining properties of two primary mobility tools for people who are blind: the long cane and adaptive mobility devices (AMDs). The long cane is the primary mobility tool used by people who are blind and visually impaired for independent and safe mobility, and AMDs are adaptive devices that are often lightweight frames approximately body width in lateral dimension that are simply pushed forward to clear the space in front of a person. The prototype cane built for this study had a wing apparatus that could be folded around the shaft of a cane but when unfolded, deployed two wheeled wings 25 cm (9.8 in) to each side of the cane tip. This project explored drop-off and obstacle detection for 6 adults with visual impairment using the deployed AMD-H and a standard long cane. The AMD-H improved obstacle detection overall, and was most effective for the smallest obstacles (2 and 6 inch diameter). The AMD-H cut the average drop-off threshold from 1.79 inches (4.55 cm) to 0.96 inches (2.44 cm). All participants showed a decrease in drop-off detection threshold and an increase in detection rate (13.9% overall). For drop-offs of 1 in (2.54 cm) and 3 in (7.62 cm), all participants showed large improvements with the AMD-H, ranging from 8.4 to 50%. The larger drop-offs of 5 in (12.7 cm) and 7 in (17.8 cm) were well detected by both types of canes.
Effect of cane length and swing arc width on drop-off and obstacle detection with the long cane
Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh
2017-01-01
A repeated-measures design with block randomization was used for the study, in which 15 adults with visual impairments attempted to detect the drop-offs and obstacles with the canes of different lengths, swinging the cane in different widths (narrow vs wide). Participants detected the drop-offs significantly more reliably with the standard-length cane (79.5% ± 6.5% of the time) than with the extended-length cane (67.6% ± 9.1%), p < .001. The drop-off detection threshold of the standard-length cane (4.1 ± 1.1 cm) was also significantly smaller than that of the extended-length cane (6.5 ± 1.8 cm), p < .001. In addition, participants detected drop-offs at a significantly higher percentage when they swung the cane approximately 3 cm beyond the widest part of the body (78.6% ± 7.6%) than when they swung it substantially wider (30 cm; 68.5% ± 8.3%), p < .001. In contrast, neither cane length (p = .074) nor cane swing arc width (p = .185) had a significant effect on obstacle detection performance. The findings of the study may help orientation and mobility specialists recommend appropriate cane length and cane swing arc width to visually impaired cane users. PMID:29276326
Effect of cane length and swing arc width on drop-off and obstacle detection with the long cane.
Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh
2017-09-01
A repeated-measures design with block randomization was used for the study, in which 15 adults with visual impairments attempted to detect the drop-offs and obstacles with the canes of different lengths, swinging the cane in different widths (narrow vs wide). Participants detected the drop-offs significantly more reliably with the standard-length cane (79.5% ± 6.5% of the time) than with the extended-length cane (67.6% ± 9.1%), p < .001. The drop-off detection threshold of the standard-length cane (4.1 ± 1.1 cm) was also significantly smaller than that of the extended-length cane (6.5 ± 1.8 cm), p < .001. In addition, participants detected drop-offs at a significantly higher percentage when they swung the cane approximately 3 cm beyond the widest part of the body (78.6% ± 7.6%) than when they swung it substantially wider (30 cm; 68.5% ± 8.3%), p < .001. In contrast, neither cane length (p = .074) nor cane swing arc width (p = .185) had a significant effect on obstacle detection performance. The findings of the study may help orientation and mobility specialists recommend appropriate cane length and cane swing arc width to visually impaired cane users.
The effect of aborting ongoing movements on end point position estimation.
Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi
2013-11-01
The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.
ERIC Educational Resources Information Center
Ehresman, Paul
1995-01-01
A precane device, called the "free-standing cane," was developed to help children with blindness along with other disabilities. The cane detects obstacles; guides the user's hands into a relaxed, static position in front of the hips; facilitates postural security and control; and offers tactile and kinesthetic feedback. (JDD)
Access to jobs : a guide to innovative practices in welfare-to-work transportation
DOT National Transportation Integrated Search
1998-01-01
This series of short related articles looks at the role of transportation in supporting welfare to work reform. These articles find lack of timely affordable public transit is a major obstacle for welfare recipients attempting to find work. Strategie...
Malik, Raza Naseem; Cote, Rachel; Lam, Tania
2017-01-01
Skilled walking, such as obstacle crossing, is an essential component of functional mobility. Sensorimotor integration of visual and proprioceptive inputs is important for successful obstacle crossing. The objective of this study was to understand how proprioceptive deficits affect obstacle-crossing strategies when controlling for variations in motor deficits in ambulatory individuals with spinal cord injury (SCI). Fifteen ambulatory individuals with SCI and 15 able-bodied controls were asked to step over an obstacle scaled to their motor abilities under full and obstructed vision conditions. An eye tracker was used to determine gaze behaviour and motion capture analysis was used to determine toe kinematics relative to the obstacle. Combined, bilateral hip and knee proprioceptive sense (joint position sense and movement detection sense) was assessed using the Lokomat and customized software controls. Combined, bilateral hip and knee proprioceptive sense in subjects with SCI varied and was significantly different from able-bodied subjects. Subjects with greater proprioceptive deficits stepped higher over the obstacle with their lead and trail limbs in the obstructed vision condition compared with full vision. Subjects with SCI also glanced at the obstacle more frequently and with longer fixation times compared with controls, but this was not related to proprioceptive sense. This study indicates that ambulatory individuals with SCI rely more heavily on vision to cross obstacles and show impairments in key gait parameters required for successful obstacle crossing. Our data suggest that proprioceptive deficits need to be considered in rehabilitation programs aimed at improving functional mobility in ambulatory individuals with SCI. This work is unique since it examines the contribution of combined, bilateral hip and knee proprioceptive sense on the recovery of skilled walking function, in addition to characterizing gaze behavior during a skilled walking task in people with motor-incomplete spinal cord injury. Copyright © 2017 the American Physiological Society.
Cognitive Control Network Contributions to Memory-Guided Visual Attention.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2016-05-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
The painful muse: migrainous artistic archetypes from visual cortex.
Aguggia, Marco; Grassi, Enrico
2014-05-01
Neurological diseases that have traditionally constituted obstacles to artistic creation can, in the case of migraine, be transformed by artists into a source of inspiration and artistic production. These phenomena represent a chapter of a broader, embryonic neurobiology of painting.
Selling to Industry for Sheltered Workshops.
ERIC Educational Resources Information Center
Rehabilitation Services Administration (DHEW), Washington, DC.
Intended for staffs of sheltered workshops for handicapped individuals, the guide presents a plan for selling the workshop idea to industry, hints on meeting obstacles, and ideas for expanding and upgrading workshop contract promotion. Brief sections cover the following topics (example subtopics are in parentheses): finding work contract prospects…
Visual short-term memory guides infants' visual attention.
Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M
2018-08-01
Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
Confocal microscopy to guide laser ablation of basal cell carcinoma: a preliminary feasibility study
NASA Astrophysics Data System (ADS)
Larson, Bjorg A.; Sierra, Heidy; Chen, Jason; Rajadhyaksha, Milind
2013-03-01
Laser ablation may be a promising method for removal of skin lesions, with the potential for better cosmetic outcomes and reduced scarring and infection. An obstacle to implementing laser ablation is that the treatment leaves no tissue for histopathological analysis. Pre-operative and intra-operative mapping of BCCs using confocal microscopy may guide the ablation of the tumor until all tumor is removed. We demonstrate preliminary feasibility of confocal microscopy to guide laser ablation of BCCs in freshly excised tissue from Mohs surgery. A 2940 nm Er:YAG laser provides efficient ablation of tumor with reduced thermal damage to the surrounding tissue.
Three-dimensional obstacle classification in laser range data
NASA Astrophysics Data System (ADS)
Armbruster, Walter; Bers, Karl-Heinz
1998-10-01
The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser rangefinders, which are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from wires at over 500 m range (depending on the system) with a high hit-and-detect probability. Despite the efficiency of the sensor, acceptance of current obstacle warning systems by test pilots is not very high, mainly due to the systems' inadequacies in obstacle recognition and visualization. This has motivated the development and testing of more advanced 3D scene analysis algorithms at FGAN-FIM to replace the obstacle recognition component of current warning systems. The basic ideas are to increase the recognition probability and to reduce the false alarm rate for hard-to-extract obstacles such as wires by using more readily recognizable objects such as terrain, poles, pylons and trees, and by implementing a hierarchical classification procedure to generate a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition.
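As a rough illustration of the hierarchical idea described above (first recover the terrain surface, then classify the above-ground returns), the sketch below labels laser range points with crude cell-based rules. The grid size, thresholds and class names are invented for illustration and do not come from the FGAN-FIM algorithms.

    def classify_range_points(points_xyz, cell=1.0):
        # Toy two-stage labelling: per horizontal grid cell, take the lowest
        # return as the local terrain height, then label above-ground returns
        # as 'wire-like' when the cell holds very few returns and 'object'
        # (pole, pylon, tree, ...) otherwise. All thresholds are illustrative.
        cells = {}
        for x, y, z in points_xyz:
            key = (int(x // cell), int(y // cell))
            cells.setdefault(key, []).append(z)
        labels = []
        for x, y, z in points_xyz:
            zs = cells[(int(x // cell), int(y // cell))]
            height = z - min(zs)
            if height < 0.5:
                labels.append("terrain")
            elif len(zs) <= 3:
                labels.append("wire-like")
            else:
                labels.append("object")
        return labels

    pts = [(0.2, 0.1, 0.0), (0.3, 0.2, 0.1), (0.25, 0.15, 6.2), (3.1, 0.4, 0.0)]
    print(classify_range_points(pts))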
Averting Uncertainty: A Practical Guide to Physical Activity Research in Australian Schools
ERIC Educational Resources Information Center
Rachele, Jerome N.; Cuddihy, Thomas F.; Washington, Tracy L.; McPhail, Steven M.
2013-01-01
Preventative health has become central to contemporary health care, identifying youth physical activity as a key factor in determining health and functioning. Schools offer a unique research setting due to distinctive methodological circumstances. However, school-based researchers face several obstacles in their endeavour to complete successful…
A Mobile Learning Module for High School Fieldwork
ERIC Educational Resources Information Center
Hsu, Tzu-Yen; Chen, Che-Ming
2010-01-01
Although fieldwork is always cited as an important component of geographic education, there are many obstacles for executing high school fieldwork. Mobile electronic products are becoming popular and some schools are able to acquire these devices for mobile learning. This study attempts to provide a mobile-assisted means of guiding students…
PS2-06: Best Practices for Advancing Multi-site Chart Abstraction Research
Blick, Noelle; Cole, Deanna; King, Colleen; Riordan, Rick; Von Worley, Ann; Yarbro, Patty
2012-01-01
Background/Aims Multi-site chart abstraction studies are becoming increasingly common within the HMORN. Differences in systems among HMORN sites can pose significant obstacles to the success of these studies. It is therefore crucial to standardize abstraction activities by following best practices for multi-site chart abstraction, as consistency of processes across sites will increase efficiencies and enhance data quality. Methods Over the past few months the authors have been meeting to identify obstacles to multi-site chart abstraction and to address ways in which multi-site chart abstraction processes can be systemized and standardized. The aim of this workgroup is to create a best practice guide for multi-site chart abstraction studies. Focus areas include: abstractor training, format for chart abstraction (database, paper, etc), data quality, redaction, mechanism for transferring data, site specific access to medical records, IRB/HIPAA concerns, and budgetary issues. Results The results of the workgroup’s efforts (the best practice guide) will be presented by a panel of experts at the 2012 HMORN conference. The presentation format will also focus on discussion among attendees to elicit further input and to identify areas that need to be further addressed. Subsequently, the best practice guide will be posted on the HMORN website. Discussion The best practice guide for multi-site chart abstraction studies will establish sound guidelines and serve as an aid to researchers embarking on multi-site chart abstraction studies. Efficiencies and data quality will be further enhanced with standardized multi-site chart abstraction practices.
Impairments in Tactile Search Following Superior Parietal Damage
ERIC Educational Resources Information Center
Skakoon-Sparling, Shayna P.; Vasquez, Brandon P.; Hano, Kate; Danckert, James
2011-01-01
The superior parietal cortex is critical for the control of visually guided actions. Research suggests that visual stimuli relevant to actions are preferentially processed when they are in peripersonal space. One recent study demonstrated that visually guided movements towards the body were more impaired in a patient with damage to superior…
Take-over performance in evasive manoeuvres.
Happee, Riender; Gold, Christian; Radlmayr, Jonas; Hergeth, Sebastian; Bengler, Klaus
2017-09-01
We investigated aftereffects of automation in take-over scenarios in a high-end moving-base driving simulator. Drivers performed evasive manoeuvres encountering a blocked lane in highway driving. We compared the performance of drivers 1) during manual driving, 2) after automated driving with eyes on the road while performing the cognitively demanding n-back task, and 3) after automated driving with eyes off the road performing the visually demanding SuRT task. Both minimum time to collision (TTC) and minimum clearance towards the obstacle disclosed a substantial number of near-miss events and are regarded as valuable surrogate safety metrics in evasive manoeuvres. TTC proved highly sensitive to the applied definition of colliding paths, and we prefer robust solutions using lane position while disregarding heading. The extended time to collision (ETTC), which takes acceleration into account, was close to the more robust conventional TTC. In line with other publications, the initial steering or braking intervention was delayed after using automation compared to manual driving. This resulted in lower TTC values and stronger steering and braking actions. Using automation, effects of cognitive distraction were similar to visual distraction for the intervention time, with effects on the surrogate safety metric TTC being larger with visual distraction. However, the precision of the evasive manoeuvres was hardly affected, with a similar clearance towards the obstacle, similar overshoots and similar excursions to the hard shoulder. Further research is needed to validate and complement the current simulator based results with human behaviour in real world driving conditions. Experiments with real vehicles can disclose possible systematic differences in behaviour, and naturalistic data can serve to validate surrogate safety measures like TTC and obstacle clearance in evasive manoeuvres. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
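The surrogate safety metrics above have standard kinematic definitions: TTC assumes a constant closing speed, and the extended TTC (ETTC) additionally accounts for relative acceleration. The sketch below implements those textbook definitions; it is not necessarily the exact formulation used in the study, and the numbers in the example call are placeholders.

    import math

    def ttc(gap_m, closing_speed_m_s):
        # Constant-velocity time to collision; infinite if the gap is not closing.
        if closing_speed_m_s <= 0.0:
            return math.inf
        return gap_m / closing_speed_m_s

    def ettc(gap_m, closing_speed_m_s, closing_accel_m_s2):
        # Extended TTC: smallest positive root of 0.5*a*t^2 + v*t - gap = 0,
        # which reduces to the constant-velocity TTC as a approaches zero.
        a, v, d = closing_accel_m_s2, closing_speed_m_s, gap_m
        if abs(a) < 1e-9:
            return ttc(d, v)
        disc = v * v + 2.0 * a * d
        if disc < 0.0:
            return math.inf  # the obstacle is never reached
        roots = [(-v + math.sqrt(disc)) / a, (-v - math.sqrt(disc)) / a]
        positive = [t for t in roots if t > 0.0]
        return min(positive) if positive else math.inf

    # A 30 m gap closing at 25 m/s while the closing rate decays at 3 m/s^2.
    print(ttc(30.0, 25.0), ettc(30.0, 25.0, -3.0))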
Scene perception and the visual control of travel direction in navigating wood ants
Collett, Thomas S.; Lent, David D.; Graham, Paul
2014-01-01
This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962
Anderson, Joe; Bingham, Geoffrey P
2010-09-01
We provide a solution to a major problem in visually guided reaching. Research has shown that binocular vision plays an important role in the online visual guidance of reaching, but the visual information and strategy used to guide a reach remains unknown. We propose a new theory of visual guidance of reaching including a new information variable, tau(alpha) (relative disparity tau), and a novel control strategy that allows actors to guide their reach trajectories visually by maintaining a constant proportion between tau(alpha) and its rate of change. The dynamical model couples the information to the reaching movement to generate trajectories characteristic of human reaching. We tested the theory in two experiments in which participants reached under conditions of darkness to guide a visible point either on a sliding apparatus or on their finger to a point-light target in depth. Slider apparatus controlled for a simple mapping from visual to proprioceptive space. When reaching with their finger, participants were forced, by perturbation of visual information used for feedforward control, to use online control with only binocular disparity-based information for guidance. Statistical analyses of trajectories strongly supported the theory. Simulations of the model were compared statistically to actual reaching trajectories. The results supported the theory, showing that tau(alpha) provides a source of information for the control of visually guided reaching and that participants use this information in a proportional rate control strategy.
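The abstract describes guiding the reach by keeping tau(alpha) and its rate of change in a fixed proportion. The sketch below simulates such a proportional-rate strategy numerically using a generic tau defined as gap divided by closing speed, since the exact disparity-based definition of tau(alpha) is not given in the abstract; the initial gap, speed, proportion constant and time step are all illustrative values.

    def simulate_proportional_rate_reach(gap0=0.30, v0=0.50, c=-2.0, dt=0.005, t_end=2.0):
        # Forward-Euler toy model: at each step choose the hand acceleration so
        # that d(tau)/dt = c * tau, i.e. tau and its rate of change stay in a
        # fixed proportion. Here tau = gap / closing speed, a generic tau.
        x, v, t, trace = gap0, v0, 0.0, []
        while t < t_end and x > 1e-4:
            a = -(v * v) / x - c * v   # acceleration satisfying tau_dot = c * tau
            v = max(v + a * dt, 1e-6)  # keep the closing speed positive
            x = max(x - v * dt, 0.0)
            t += dt
            trace.append((round(t, 3), round(x, 4), round(v, 4)))
        return trace

    # Both the remaining gap and the closing speed decay smoothly toward zero.
    print(simulate_proportional_rate_reach()[-1])

Under this rule the closing speed eventually falls toward zero as the target is approached, giving the kind of smoothly converging trajectory the authors describe as characteristic of human reaching.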
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Moving Stimuli Facilitate Synchronization But Not Temporal Perception.
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.
AMERICAN STANDARD GUIDE FOR SCHOOL LIGHTING.
ERIC Educational Resources Information Center
Illuminating Engineering Society, New York, NY.
This is a guide for school lighting, designed for educators as well as architects. It makes use of recent research, notably the Blackwell report on evaluation of visual tasks. The guide begins with an overview of changing goals and needs of school lighting, and a tabulation of common classroom visual tasks that require variations in lighting.…
Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M
2016-12-01
The goal of this study is to investigate the influence of mental fatigue on event-related potential P300 features (maximum peak, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacle velocities). A correlation study between P300 features and fatigue ratings was conducted. Finally, the best-correlated features were supplied to three classification algorithms: MLP (multilayer perceptron), linear discriminant analysis and support vector machine. The results showed that the maximum-peak feature over visual and temporal regions, as well as the period feature over frontal, fronto-central and visual regions, were correlated with mental fatigue levels. On the other hand, the minimum amplitude and latency features did not show any correlation. Among the classification techniques, MLP showed the best performance, although the differences between techniques were minimal. These findings can help in designing suitable mental-fatigue-aware wheelchair control.
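As a rough illustration of the analysis pipeline described (correlate P300-derived features with fatigue ratings, keep the correlated ones, classify with an MLP), here is a hedged sketch using scikit-learn. The data are synthetic and the feature names are placeholders; the paper does not specify its software or exact parameters.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: per-trial P300 features and a 3-level fatigue rating.
rng = np.random.default_rng(0)
n_trials = 120
fatigue = rng.integers(0, 3, n_trials)
features = {
    "max_peak_visual": fatigue * 0.8 + rng.normal(0, 1.0, n_trials),
    "period_frontal":  fatigue * 0.6 + rng.normal(0, 1.0, n_trials),
    "latency_central": rng.normal(0, 1.0, n_trials),  # deliberately uncorrelated
}

# Keep only features whose correlation with the ratings is significant.
selected = []
for name, values in features.items():
    r, p = pearsonr(values, fatigue)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
    if p < 0.05:
        selected.append(values)

X = np.column_stack(selected)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, fatigue, cv=5)
print(f"MLP cross-validated accuracy: {scores.mean():.2f}")
```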
Lessons Learned From Google Glass: Telemedical Spark or Unfulfilled Promise?
Yu, Jonathan; Ferniany, William; Guthrie, Barton; Parekh, Selene G; Ponce, Brent
2016-04-01
Wearable devices such as Google Glass could potentially be used in the health care setting to expand access and improve quality of care. This study aims to assess the demographics of Google Glass users in health care and determine the obstacles to using Google Glass by surveying those who are known to use the device. A 48-question survey was designed to assess demographics of users, technological limitations of Google Glass, and obstacles to implementation of the device. The physicians surveyed worked in various fields of health care, with 50% of the respondents being surgeons. Potential participants were found using an Internet search for physicians using Google Glass in their practice. Outcome measures were divided into demographic information of users, technological limitations of the device, and administrative obstacles. A 43.6% response rate was observed. The majority of users were male, assistant professors, in academic hospitals, and in the United States. Numerous technological limitations were observed by the majority, including device ergonomics, display location, video quality, and audio quality. Patient confidentiality and data security were the major concerns among administrative obstacles. Despite the potential of Google Glass, numerous obstacles exist that limit its use in health care. While Google Glass has been discontinued, the results of this study may be used to guide future designs of wearable devices. © The Author(s) 2015.
Vrooijink, Gustaaf J.; Abayazid, Momen; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak
2015-01-01
Needle insertion is commonly performed in minimally invasive medical procedures such as biopsy and radiation cancer treatment. During such procedures, accurate needle tip placement is critical for correct diagnosis or successful treatment. Accurate placement of the needle tip inside tissue is challenging, especially when the target moves and anatomical obstacles must be avoided. We develop a needle steering system capable of autonomously and accurately guiding a steerable needle using two-dimensional (2D) ultrasound images. The needle is steered to a moving target while avoiding moving obstacles in a three-dimensional (3D) non-static environment. Using a 2D ultrasound imaging device, our system accurately tracks the needle tip motion in 3D space in order to estimate the tip pose. The needle tip pose is used by a rapidly exploring random tree-based motion planner to compute a feasible needle path to the target. The motion planner is sufficiently fast such that replanning can be performed repeatedly in a closed-loop manner. This enables the system to correct for perturbations in needle motion, and movement in obstacle and target locations. Our needle steering experiments in a soft-tissue phantom achieve maximum targeting errors of 0.86 ± 0.35 mm (without obstacles) and 2.16 ± 0.88 mm (with a moving obstacle). PMID:26279600
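To make the planning component concrete, here is a generic 2D rapidly exploring random tree (RRT) sketch, not the authors' 3D ultrasound-guided planner: the tree grows from the entry point toward a target while rejecting edges that cross a circular obstacle, and closed-loop replanning would simply rerun it with updated target and obstacle poses. All geometry and parameters are made-up placeholders.

```python
import math, random

random.seed(1)
start, goal = (0.0, 0.0), (9.0, 9.0)
obstacle_c, obstacle_r = (5.0, 5.0), 1.5   # one obstacle, frozen for this plan
step, goal_tol = 0.5, 0.5

def collides(p, q, n=10):
    # sample along segment p-q and test against the obstacle disc
    return any(math.dist(((p[0]*(n-i) + q[0]*i) / n, (p[1]*(n-i) + q[1]*i) / n),
                         obstacle_c) <= obstacle_r
               for i in range(n + 1))

nodes, parent = [start], {0: None}
for _ in range(5000):
    sample = goal if random.random() < 0.1 else (random.uniform(0, 10), random.uniform(0, 10))
    near_i = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
    near = nodes[near_i]
    d = math.dist(near, sample)
    if d < 1e-9:
        continue
    new = sample if d <= step else (near[0] + step * (sample[0] - near[0]) / d,
                                    near[1] + step * (sample[1] - near[1]) / d)
    if not collides(near, new):
        nodes.append(new)
        parent[len(nodes) - 1] = near_i
        if math.dist(new, goal) < goal_tol:
            break

path, i = [], len(nodes) - 1            # walk back from the last node to the root
while i is not None:
    path.append(nodes[i])
    i = parent[i]
print(f"tree size {len(nodes)}, planned path has {len(path)} waypoints")
```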
Top-down contextual knowledge guides visual attention in infancy.
Tummeltshammer, Kristen; Amso, Dima
2017-10-26
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.
Memory-guided saccade processing in visual form agnosia (patient DF).
Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika
2010-01-01
According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.
Animal Preparations to Assess Neurophysiological Effects of Bio-Dynamic Environments.
1980-07-17
deprivation in preventing the acquisition of visually-guided behaviors. The next study examined acquisition of visually-guided behaviors in six animals…
Stimulation of the substantia nigra influences the specification of memory-guided saccades
Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel
2013-01-01
In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. Because these effects occurred with manipulation of nigral activity well before the initiation of saccades and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it is evolving primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
The effect of different brightness conditions on visually and memory guided saccades.
Felßberg, Anna-Maria; Dombrowe, Isabel
2018-01-01
It is commonly assumed that saccades in the dark are slower than saccades in a lit room. Early studies that investigated this issue using electrooculography (EOG) often compared memory guided saccades in darkness to visually guided saccades in an illuminated room. However, later studies showed that memory guided saccades are generally slower than visually guided saccades. Research on this topic is further complicated by the fact that the different existing eyetracking methods do not necessarily lead to consistent measurements. In the present study, we independently manipulated task (memory guided/visually guided) and screen brightness (dark, medium and light) in an otherwise completely dark room, and measured the peak velocity and the duration of the participant's saccades using a popular pupil-cornea reflection (p-cr) eyetracker (Eyelink 1000). Based on a critical reading of the literature, including a recent study using cornea-reflection (cr) eye tracking, we did not expect any velocity or duration differences between the three brightness conditions. We found that memory guided saccades were generally slower than visually guided saccades. In both tasks, eye movements on a medium and light background were equally fast and had similar durations. However, saccades on the dark background were slower and had shorter durations, even after we corrected for the effect of pupil size changes. This means that this is most likely an artifact of current pupil-based eye tracking. We conclude that the common assumption that saccades in the dark are slower than in the light is probably not true, however pupil-based eyetrackers tend to underestimate the peak velocity of saccades on very dark backgrounds, creating the impression that this might be the case. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
A brain-machine interface (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance that provides enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI with eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks involving obstacle avoidance in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) were verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects decreased significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Efficacy of a Low Vision Patient Consultation
ERIC Educational Resources Information Center
Siemsen, Dennis W.; Bergstrom, A. Renée; Hathaway, Julie C.
2005-01-01
A variety of obstacles can prevent individuals with low vision from deriving the greatest possible benefit from the rehabilitation process, including inadequate understanding of their visual impairment, lack of knowledge about available services, and misconceptions about low vision devices. This study explores the use of a…
Focus on Hinduism: Audio-Visual Resources for Teaching Religion. Occasional Publication No. 23.
ERIC Educational Resources Information Center
Dell, David; And Others
The guide presents annotated lists of audio and visual materials about the Hindu religion. The authors point out that Hinduism cannot be comprehended totally by reading books; thus the resources identified in this guide will enhance understanding based on reading. The guide is intended for use by high school and college students, teachers,…
Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.
Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele
2015-10-01
Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated in a 3D-Printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometry and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
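The abstract mentions a Bland-Altman comparison of landmark distances between systems. The sketch below shows that calculation in its standard form (bias and 95% limits of agreement on paired measurements); the paired distances are synthetic placeholders, not data from the phantom study.

```python
import numpy as np

rng = np.random.default_rng(3)
ct_mm = rng.uniform(10, 60, 20)                    # "ground truth" CT distances
mediguide_mm = ct_mm + rng.normal(0.3, 0.8, 20)    # second modality, small offset

diff = mediguide_mm - ct_mm
mean = (mediguide_mm + ct_mm) / 2                  # x-axis of a Bland-Altman plot
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # half-width of limits of agreement

print(f"bias = {bias:.2f} mm, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```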
Addressing Substandard Teaching in Schools: An Assessment of Principals' Enabling Behavior
ERIC Educational Resources Information Center
Gluck, Arlene
2013-01-01
Researchers estimate that 5-15% of American teachers do not produce the desired student performance. Nevertheless, 99% of U.S. teachers receive satisfactory evaluations annually. Guided by literature on obstacles to effective evaluation, this qualitative case study sought to understand the reasons why principals hesitate to confront substandard…
Taking Charge of Professional Development: A Practical Model for Your School
ERIC Educational Resources Information Center
Semadeni, Joseph
2009-01-01
Overcome budget cuts, lack of leadership, top-down mandates, and other obstacles to professional development by using this book's take-charge approach. Joseph H. Semadeni guides you through a systemic method to professional development that: (1) Motivates teachers to continuously learn and apply best practices; (2) Makes adult learning activities…
3 CFR 8570 - Proclamation 8570 of September 27, 2010. Family Day, 2010
Code of Federal Regulations, 2011 CFR
2011-01-01
... influences that can lead to dangerous decisions, such as abusing drugs and alcohol. When parents, loved ones... active parents and guardians play a critical role in keeping our children drug-free, and they can... Committed families shape and guide our children, preparing them for every obstacle they may encounter and...
Mythogeography Works: Performing Multiplicity on Queen Street
ERIC Educational Resources Information Center
Smith, Phil
2011-01-01
This paper considers the exploration of, and performance on, a single street in Exeter, UK, as guided by an idea of "mythogeography" and a determination to address a place as a multiplicity of meanings, objects, accretions, rhythms and exceptions. It explores the virtues of and obstacles facing a performance made "on the hoof"…
Early Exposure to & Preparation for College: A Guide for Educators
ERIC Educational Resources Information Center
Laing, Tony; Villavicencio, Adriana
2016-01-01
Black and Latino young men may face a number of barriers on their pathway to college, including a belief that college is not for them, difficulty navigating the college search and application process, financial obstacles, and insufficient academic preparation. Expanded Success Initiative (ESI) schools are working to prepare students for college…
From Teacher to Writer: How Does It Happen?
ERIC Educational Resources Information Center
Lewis, Barbara A.
1992-01-01
Discusses a sixth grade teacher's experience in turning a classroom project into a book. Offers advice for teachers on becoming writers. Explores obstacles, benefits, and suggestions on writing, editing, and publishing work. Describes a class project that resulted in the cleanup of a toxic waste site and the publication of "The Kid's Guide to…
Leadership and the Force of Love: Six Keys to Motivating with Love.
ERIC Educational Resources Information Center
Hoyle, John R.
Although educators are frequently faced with the challenges of politics, hostility, selfishness, and violence, this book demonstrates that these obstacles can be overcome through vision, teamwork, motivation, empowerment, and communication. By using love as a guiding force in the daily interactions with others, the way one conducts business is…
A Computer-Based Simulation for Teaching Heat Transfer across a Woody Stem
ERIC Educational Resources Information Center
Maixner, Michael R.; Noyd, Robert K.; Krueger, Jerome A.
2010-01-01
To assist student understanding of heat transfer through woody stems, we developed an instructional package that included an Excel-based, one-dimensional simulation model and a companion instructional worksheet. Guiding undergraduate botany students in applying principles of thermodynamics to plants in nature is fraught with two main obstacles:…
Visser, Maretha J; Mundell, Jonathan P
2008-07-01
HIV-infected women need support to deal with their diagnosis as well as with the stigma attached to HIV. As part of their practical training, Master's-level psychology students negotiated with the staff of four clinics in townships in Tshwane, South Africa, to establish support groups for HIV+ women and offered to assist them in facilitating the groups. This study aimed to understand why the implementation of groups was successful in one clinic and not other clinics. The student reports on their experiences and interaction with clinic staff and clients were used as sources of data. Using qualitative data analysis, different dynamics and factors that could affect project implementation were identified in each clinic. The socio-ecological and systems theories were used to understand implementation processes and obstacles in implementation. The metaphor of building a bridge over a gorge was used to describe the different phases in and obstacles to the implementation of the intervention. Valuable lessons were learnt, resulting in the development of guiding principles for the implementation of support groups in community settings.
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
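The core idea of Guided Search, a parallel activation map that ranks items for a serial inspection stage, can be illustrated with a toy conjunction-search simulation. This is our own sketch, not Cave and Wolfe's simulation; items, weights and noise level are made up.

```python
import random

random.seed(7)
target = {"color": "red", "orientation": "vertical"}

def make_display(n_items):
    distractor_pool = [{"color": "red", "orientation": "horizontal"},
                       {"color": "green", "orientation": "vertical"}]
    items = [dict(random.choice(distractor_pool), is_target=False) for _ in range(n_items - 1)]
    items.append(dict(target, is_target=True))
    random.shuffle(items)
    return items

def serial_steps(items, noise=0.6):
    # parallel stage: one activation unit per feature shared with the target, plus noise
    activation = [sum(item[f] == target[f] for f in target) + random.gauss(0, noise)
                  for item in items]
    # serial stage: inspect items in descending activation until the target is found
    order = sorted(range(len(items)), key=lambda i: activation[i], reverse=True)
    return next(rank + 1 for rank, i in enumerate(order) if items[i]["is_target"])

for n in (4, 8, 16, 32):
    mean_steps = sum(serial_steps(make_display(n)) for _ in range(500)) / 500
    print(f"set size {n:2d}: mean serial steps to target = {mean_steps:.2f}")
```

Because the target shares features with every distractor but always receives the highest noiseless activation, the simulated conjunction search stays far more efficient than a random serial scan, which is the qualitative pattern the theory was built to explain.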
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
NASA Astrophysics Data System (ADS)
Vergara-Perez, Sandra; Marucho, Marcelo
2016-01-01
One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
Vergara-Perez, Sandra; Marucho, Marcelo
2015-01-01
One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules. PMID:26924848
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.
Vergara-Perez, Sandra; Marucho, Marcelo
2016-01-01
One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
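For reference, a standard form of the nonlinear Poisson-Boltzmann equation solved by APBS-style codes is given below. The notation is ours (not quoted from the article): u is the dimensionless potential e_c φ / k_B T, ε the position-dependent dielectric coefficient, κ̄² the modified Debye-Hückel screening term, and the right-hand side the fixed point charges of the biomolecule.

```latex
\[
  -\nabla \cdot \bigl[\epsilon(\mathbf{r})\,\nabla u(\mathbf{r})\bigr]
  + \bar{\kappa}^{2}(\mathbf{r})\,\sinh u(\mathbf{r})
  = \frac{4\pi e_c^{2}}{k_B T}\sum_{i=1}^{N} z_i\,\delta(\mathbf{r}-\mathbf{r}_i)
\]
```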
Consumer Control Points: Creating a Visual Food Safety Education Model for Consumers.
ERIC Educational Resources Information Center
Schiffman, Carole B.
Consumer education has always been a primary consideration in the prevention of food-borne illness. Using nutrition education and the new food guide as a model, this paper develops suggestions for a framework of microbiological food safety principles and a compatible visual model for communicating key concepts. Historically, visual food guides in…
Colorado Multicultural Resources for Arts Education: Dance, Music, Theatre, and Visual Art.
ERIC Educational Resources Information Center
Cassio, Charles J., Ed.
This Colorado resource guide is based on the premise that the arts (dance, music, theatre, and visual art) provide a natural arena for teaching multiculturalism to students of all ages. The guide provides information to Colorado schools about printed, disc, video, and audio tape visual prints, as well as about individuals and organizations that…
Rogers, Donna R B; Ei, Sue; Rogers, Kim R; Cross, Chad L
2007-05-01
This pilot study examines the use of guided visualizations that incorporate both cognitive and behavioral techniques with vibroacoustic therapy and cranial electrotherapy stimulation to form a multi-component therapeutic approach. This multi-component approach to cognitive-behavioral therapy (CBT) was used to treat patients presenting with a range of symptoms including anxiety, depression, and relationship difficulties. Clients completed a pre- and post-session symptom severity scale and CBT skills practice survey. The program consisted of 16 guided visualizations incorporating CBT techniques that were accompanied by vibroacoustic therapy and cranial electrotherapy stimulation. Significant reduction in symptom severity was observed in pre- and post-session scores for anxiety symptoms, relationship difficulties, and depressive symptoms. The majority of the clients (88%) reported use of CBT techniques learned in the guided visualizations at least once per week outside of the sessions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Measures for simulator evaluation of a helicopter obstacle avoidance system
NASA Technical Reports Server (NTRS)
Demaio, Joe; Sharkey, Thomas J.; Kennedy, David; Hughes, Micheal; Meade, Perry
1993-01-01
The U.S. Army Aeroflightdynamics Directorate (AFDD) has developed a high-fidelity, full-mission simulation facility for the demonstration and evaluation of advanced helicopter mission equipment. The Crew Station Research and Development Facility (CSRDF) provides the capability to conduct one- or two-crew full-mission simulations in a state-of-the-art helicopter simulator. The CSRDF provides a realistic, full field-of-regard visual environment with simulation of state-of-the-art weapons, sensors, and flight control systems. We are using the CSRDF to evaluate the ability of an obstacle avoidance system (OASYS) to support low altitude flight in cluttered terrain using night vision goggles (NVG). The OASYS uses a laser radar to locate obstacles to safe flight in the aircraft's flight path. A major concern is the detection of wires, which can be difficult to see with NVG, but other obstacles--such as trees, poles or the ground--are also a concern. The OASYS symbology is presented to the pilot on a head-up display mounted on the NVG (NVG-HUD). The NVG-HUD presents head-stabilized symbology to the pilot while allowing him to view the image intensified, out-the-window scene through the HUD. Since interference with viewing through the display is a major concern, OASYS symbology must be designed to present usable obstacle clearance information with a minimum of clutter.
NASA Astrophysics Data System (ADS)
Crawford, Bobby Grant
In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed interest in the ability of such vehicles to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.
ERIC Educational Resources Information Center
Department of Justice, Washington, DC. Civil Rights Div.
This item consists of three separate "Technical Assistance Guides" combined into one document because they all are concerned with improving access to information for handicapped people. Specifically, the three guides provide: (1) information to enable hearing impaired, visually impaired, and mobility impaired persons to have access to public…
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms to provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik
2016-11-11
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogenous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at a short range (0-30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik
2016-01-01
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogenous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at a short range (0–30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717
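The general recipe, model the homogeneous background in a CNN feature space and flag cells that deviate from it, can be sketched with a simple Mahalanobis-distance detector. This is a generic illustration of feature-space anomaly detection, not the published DeepAnomaly architecture, and the feature maps below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
C, H, W = 64, 30, 40                               # hypothetical feature-map shape

# 1) Fit the background model from feature vectors of obstacle-free field images.
background = rng.normal(0, 1, (5000, C))
mu = background.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(background, rowvar=False) + 1e-6 * np.eye(C))

# 2) Score a new feature map cell by cell; an "obstacle" region gets a shifted mean.
feature_map = rng.normal(0, 1, (H, W, C))
feature_map[12:15, 20:23] += 3.0                   # synthetic anomaly (e.g., a person)

diff = feature_map.reshape(-1, C) - mu
mahal = np.sqrt(np.einsum("nc,cd,nd->n", diff, cov_inv, diff)).reshape(H, W)

threshold = np.percentile(mahal, 99)               # simple data-driven threshold
anomaly_mask = mahal > threshold
print(f"{anomaly_mask.sum()} of {H*W} cells flagged as anomalous")
```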
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
How to Develop Children as Researchers: A Step-by-Step Guide to Teaching the Research Process
ERIC Educational Resources Information Center
Kellett, Mary
2005-01-01
The importance of research in professional and personal development is increasingly being acknowledged. So why should children not benefit in a similar way? Traditionally, children have been excluded from this learning process because research methodology is considered too difficult for them. Principal obstacles focus around three key barriers:…
A Hands-On Freshman Survey Course to Steer Undergraduates into Microsystems Coursework and Research
ERIC Educational Resources Information Center
Eddings, M. A.; Stephenson, J. C.; Harvey, I. R.
2009-01-01
Full class loads and inflexible schedules can be a significant obstacle in the implementation of freshman survey courses designed to guide engineering students into emerging research areas such as micro- and nanosystems. A hands-on, interactive course was developed to excite freshmen early in their engineering program to pursue research and…
ERIC Educational Resources Information Center
Jones, Mark T.; Eick, Charles J.
2007-01-01
Two elementary certified middle school science teachers are studied for changes in practical knowledge supporting the implementation of kit-based inquiry as part of a schoolwide reform effort. Emphasis is placed on studying how these two pilot teachers enact guided inquiry within their unique pedagogical and curricular interests, and what…
Building Literacy in Social Studies: Strategies for Improving Comprehension and Critical Thinking
ERIC Educational Resources Information Center
Klemp, Ron; McBride, Bill; Ogle, Donna
2007-01-01
It's tough to teach social studies and history to students who have trouble reading and understanding textbooks and other resources. But you can overcome those obstacles and motivate students to excel in social studies classes by using the concepts and research-based techniques in this guide. Renowned reading expert Donna Ogle teams up with two…
A new neural framework for visuospatial processing.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Mishkin, Mortimer
2011-04-01
The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
Visually Impaired: Curriculum Guide.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton.
The curriculum guide provides guidelines for developing academic and living vocational skills in visually handicapped students from preschool to adolescence. The document, divided into two sections, outlines objectives, teaching strategies, and materials for each skill area. Section 1 covers the following academic skills: communication,…
Grasping with the eyes of your hands: hapsis and vision modulate hand preference.
Stone, Kayla D; Gonzalez, Claudia L R
2014-02-01
Right-hand preference has been demonstrated for visually guided reaching and grasping. Grasping, however, requires the integration of both visual and haptic cues. To what extent does vision influence hand preference for grasping? Is there a hand preference for haptically guided grasping? Two experiments were designed to address these questions. In Experiment 1, individuals were tested in a reaching-to-grasp task with vision (sighted condition) and with hapsis (blindfolded condition). Participants were asked to put together 3D models using building blocks scattered on a tabletop. The models were simple, composed of ten blocks of three different shapes. Starting condition (Vision-First or Hapsis-First) was counterbalanced among participants. Right-hand preference was greater in visually guided grasping but only in the Vision-First group. Participants who initially built the models while blindfolded (Hapsis-First group) used their right hand significantly less for the visually guided portion of the task. To investigate whether grasping using hapsis modifies subsequent hand preference, participants received an additional haptic experience in a follow-up experiment. While blindfolded, participants manipulated the blocks in a container for 5 min prior to the task. This additional experience did not affect right-hand use on visually guided grasping but had a robust effect on haptically guided grasping. Together, the results demonstrate first that hand preference for grasping is influenced by both vision and hapsis, and second, they highlight how flexible this preference could be when modulated by hapsis.
SGM-based seamline determination for urban orthophoto mosaicking
NASA Astrophysics Data System (ADS)
Pang, Shiyan; Sun, Mingwei; Hu, Xiangyun; Zhang, Zuxun
2016-02-01
Mosaicking is a key step in the production of digital orthophoto maps (DOMs), especially for large-scale urban orthophotos. During this step, manual intervention is commonly involved to avoid the case where the seamline crosses obvious objects (e.g., buildings), which causes geometric discontinuities on the DOMs. How to guide the seamline to avoid crossing obvious objects has become a popular topic in the field of photogrammetry and remote sensing. Thus, a new semi-global matching (SGM)-based method to guide seamline determination is proposed for urban orthophoto mosaicking in this study, which can largely eliminate geometric discontinuities. The approximate epipolar geometry of the orthophoto pairs is first derived and proven, and the approximate epipolar image pair is then generated by rotating the two orthorectified images according to the parallax direction. A SGM algorithm is applied to their overlaps to obtain the corresponding pixel-wise disparity. According to a predefined disparity threshold, the overlap area is partitioned into obstacle and non-obstacle areas. For the non-obstacle regions, the Hilditch thinning algorithm is used to obtain the skeleton line, followed by Dijkstra's algorithm to search for the optimal path on the skeleton network as the seamline between two orthophotos. A whole seamline network is constructed based on the strip information recorded in flight. In the experimental section, the approximate epipolar geometric theory of the orthophoto is first analyzed and verified, and the effectiveness of the proposed method is then validated by comparing its results with the results of the geometry-based, OrthoVista, and orthoimage elevation synchronous model (OESM)-based methods.
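The final step of the pipeline, a shortest-path search that keeps the seamline out of obstacle regions, can be sketched with Dijkstra's algorithm on a grid where obstacle cells are forbidden. This covers only that last step (not the SGM disparity or skeletonisation stages), and the grid, costs and obstacle block are made-up placeholders.

```python
import heapq

H, W = 20, 20
obstacle = {(r, c) for r in range(6, 14) for c in range(8, 12)}   # a "building"
start, goal = (10, 0), (10, 19)

def dijkstra(start, goal):
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < H and 0 <= nc < W and (nr, nc) not in obstacle:
                nd = d + 1.0                      # unit cost per step in this sketch
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

seamline = dijkstra(start, goal)
print(f"seamline length: {len(seamline)} cells, detouring around the obstacle block")
```

In the paper the search runs on the skeleton network of the non-obstacle regions rather than on a raw grid, but the optimisation step is the same idea.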
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during their flight maneuvers such as: takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment without any direct measurement of either speed or distance. In flying insects, roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can therefore help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
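The central quantity here is the translational optic flow seen on a side wall, omega = v / D (forward speed over lateral distance, in rad/s), which is available without measuring v or D separately. The sketch below shows a simple regulator that adjusts forward speed to hold omega at a set-point in a narrowing tunnel; the gains, set-point and tunnel geometry are our own illustration, not a specific model from the survey.

```python
dt = 0.01
wall_distance = 0.5      # metres to the nearest wall
v = 1.0                  # forward speed (m/s)
omega_set = 2.0          # optic-flow set-point (rad/s), hypothetical
gain = 1.5

for step in range(300):
    # tunnel narrows over time, as when an insect flies into a constriction
    wall_distance = max(0.15, 0.5 - 0.001 * step)
    omega = v / wall_distance                 # measured translational optic flow
    v += gain * (omega_set - omega) * dt      # slow down when flow exceeds the set-point
    v = max(v, 0.05)
    if step % 100 == 0:
        print(f"t={step*dt:.1f}s  D={wall_distance:.2f} m  v={v:.2f} m/s  omega={omega:.2f} rad/s")
```

At equilibrium the speed settles near omega_set * D, so the simulated agent slows down in proportion to tunnel width, qualitatively matching the insect behaviour the review describes.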
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Memory-guided reaching in a patient with visual hemiagnosia.
Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc
2016-06-01
The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. The particular importance of the ventral stream for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually guided and memory-guided reaching in a new patient who had suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccurate memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shade matching assisted by digital photography and computer software.
Schropp, Lars
2009-04-01
To evaluate the efficacy of digital photographs and graphic computer software for color matching compared to conventional visual matching. The shade of a tab from a shade guide (Vita 3D-Master Guide) placed in a phantom head was matched to a second guide of the same type by nine observers. This was done for twelve selected shade tabs (tests). The shade-matching procedure was performed visually in a simulated clinic environment and with digital photographs, and the time spent for both procedures was recorded. An alternative arrangement of the shade tabs was used in the digital photographs. In addition, a graphic software program was used for color analysis. Hue, chroma, and lightness values of the test tab and all tabs of the second guide were derived from the digital photographs. According to the CIE L*C*h* color system, the color differences between the test tab and tabs of the second guide were calculated. The shade guide tab that deviated least from the test tab was determined to be the match. Shade matching performance by means of graphic software was compared with the two visual methods and tested by Chi-square tests (alpha= 0.05). Eight of twelve test tabs (67%) were matched correctly by the computer software method. This was significantly better (p < 0.02) than the performance of the visual shade matching methods conducted in the simulated clinic (32% correct match) and with photographs (28% correct match). No correlation between time consumption for the visual shade matching methods and frequency of correct match was observed. Shade matching assisted by digital photographs and computer software was significantly more reliable than by conventional visual methods.
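A minimal sketch of the software matching rule described above, assuming each tab's colour has already been extracted from the photograph as CIE L*C*h* values (hue in degrees); the dictionary keys and helper names are hypothetical, not those of the cited software:

```python
import math

def delta_e_lch(test, ref):
    """Colour difference from CIE L*C*h* coordinates: Euclidean distance over
    lightness, chroma, and the hue-difference component DeltaH."""
    dL = test["L"] - ref["L"]
    dC = test["C"] - ref["C"]
    dh = math.radians(test["h"] - ref["h"])
    # DeltaH: chord length between the two colours on the hue circle
    dH = 2.0 * math.sqrt(max(test["C"] * ref["C"], 0.0)) * math.sin(dh / 2.0)
    return math.sqrt(dL ** 2 + dC ** 2 + dH ** 2)

def best_match(test_tab, guide_tabs):
    """Return the guide tab whose colour deviates least from the test tab."""
    return min(guide_tabs, key=lambda tab: delta_e_lch(test_tab, tab))
```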
It's Time the Locker Got a Facelift
ERIC Educational Resources Information Center
Schneider, Tod
2009-01-01
Lockers are often begrudging investments, scraped from the bottom of the budget barrel. This is unfortunate for a number of reasons, one of which is that they often serve as the internal face of the school: endless, grim sentries lining mile-long halls. Alternately, they may be entombed, a catacomb of visual obstacles stuffed into independent…
Coming from outside the Academy. Values and 2.0 Culture in Higher Education
ERIC Educational Resources Information Center
Serrat, Nuria; Rubio, Anna
2012-01-01
This article reflects on how some values, interests, and particularities of 2.0 culture enter higher and postgraduate education institutions. Through the identification of the features of 2.0, this document visualizes some of the resistances, obstacles, possibilities, and opportunities detected in these institutions, many of them focusing on…
Useful Effect Size Interpretations for Single Case Research
ERIC Educational Resources Information Center
Parker, Richard I.; Hagan-Burke, Shanna
2007-01-01
An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R² as "percent of variance accounted for" does not resonate with most visual analysts. In fact, the only comparative analysis widely…
Taking on the Heat--A Narrative Account of How Infrared Cameras Invite Instant Inquiry
ERIC Educational Resources Information Center
Haglund, Jesper; Jeppsson, Fredrik; Schönborn, Konrad J.
2016-01-01
Integration of technology, social learning and scientific models offers pedagogical opportunities for science education. A particularly interesting area is thermal science, where students often struggle with abstract concepts, such as heat. In taking on this conceptual obstacle, we explore how hand-held infrared (IR) visualization technology can…
ERIC Educational Resources Information Center
American Foundation for the Blind, New York, NY.
Research studies relevant to some aspect of visual impairment are contained in the bulletin. D.M. Baumann and others report on the collapsible cane, and T.L. DeFazio and T.B. Sheridan analyze the vibrations of the cane. J.J. Gibson considers sensitivity, and J. Mickunas, Jr., and T.B. Sheridan discuss the obstacle course in mobility evaluation of…
Adolescents with Low Vision: Perceptions of Driving and Nondriving
ERIC Educational Resources Information Center
Sacks, Sharon Zell; Rosenblum, L. Penny
2006-01-01
Two studies examined how adolescents with low vision perceive their ability to drive. The results of both studies indicated similarities in the participants' responses with respect to knowledge of visual impairment, information about options for driving with low vision, frustrations and obstacles imposed by not being able to drive, and independent…
A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.
ERIC Educational Resources Information Center
Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David
2003-01-01
A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust and can recover from the ‘driver-lost’ scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, and when hitchhiking should and should not be allowed, through experimental results. PMID:28809803
Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori
2017-08-15
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust and can recover from the 'driver-lost' scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, and when hitchhiking should and should not be allowed, through experimental results.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
A new neural framework for visuospatial processing
Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Mishkin, Mortimer
2012-01-01
The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a ‘What’ pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception (‘Where’), more recent accounts suggest it primarily serves non-conscious visually guided action (‘How’). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively. PMID:21415848
Ultrasound imaging in medical student education: Impact on learning anatomy and physical diagnosis.
So, Sokpoleak; Patel, Rita M; Orebaugh, Steven L
2017-03-01
Ultrasound use has expanded dramatically among the medical specialties for diagnostic and interventional purposes, due to its affordability, portability, and practicality. This imaging modality, which permits real-time visualization of anatomic structures and relationships in vivo, holds potential for pre-clinical instruction of students in anatomy and physical diagnosis, as well as providing a bridge to the eventual use of bedside ultrasound by clinicians to assess patients and guide invasive procedures. In many studies, but not all, improved understanding of anatomy has been demonstrated, and in others, improved accuracy in selected aspects of physical diagnosis is evident. Most students have expressed a highly favorable impression of this technology for anatomy education when surveyed. Logistic issues or obstacles to the integration of ultrasound imaging into anatomy teaching appear to be readily overcome. The enthusiasm of students and anatomists for teaching with ultrasound has led to widespread implementation of ultrasound-based teaching initiatives in medical schools the world over, including some with integration throughout the entire curriculum; a trend that likely will continue to grow. Anat Sci Educ 10: 176-189. © 2016 American Association of Anatomists.
Laser radar system for obstacle avoidance
NASA Astrophysics Data System (ADS)
Bers, Karlheinz; Schulz, Karl R.; Armbruster, Walter
2005-09-01
The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility, and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser radars, which are built by EADS and are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from objects at distances of military relevance with a high hit-and-detect probability. The development of advanced 3D-scene analysis algorithms has increased the recognition probability and reduced the false alarm rate by using more readily recognizable objects such as terrain, poles, pylons, and trees to generate a parametric description of the terrain surface as well as the class, position, orientation, size, and shape of all objects in the scene. The sensor system and the implemented algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition. This paper describes different 3D-imaging ladar sensors with a unique system architecture but different components matched to different military applications. Emphasis is laid on an obstacle warning system with a high probability of detection of thin wires, the real-time processing of the measured range image data, obstacle classification, and visualization.
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end, in supervised mode, to map raw input images to a direction. The images in the data sets were collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line-tracking experiment is conducted in order to track a desired path composed of straight and curved lines. The goal of the obstacle-avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line-tracking experiment, and a 1.8% error rate on the training set and a less than 5% error rate on the test set in the obstacle-avoidance experiment. During the actual tests, the robot can follow the runway centerline outdoors and accurately avoid obstacles indoors. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
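For illustration, a minimal sketch of the noise-based augmentation step described above, assuming images are NumPy arrays scaled to [0, 1]; the function name and parameter values are illustrative, not taken from the cited work:

```python
import numpy as np

def augment(image, sigma=0.05, sp_frac=0.02, rng=None):
    """Return two noisy copies of a [0, 1]-scaled image array: one with additive
    Gaussian noise, one with salt-and-pepper noise, as a simple guard against
    overfitting."""
    rng = rng if rng is not None else np.random.default_rng()
    gaussian = np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

    salt_pepper = image.copy()
    mask = rng.random(image.shape) < sp_frac              # values to corrupt
    salt_pepper[mask] = rng.integers(0, 2, mask.sum())    # 0 = pepper, 1 = salt
    return gaussian, salt_pepper
```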
Commercial Art I and Commercial Art II: An Instructional Guide.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
A teacher's guide for two sequential one-year commercial art courses for high school students is presented. Commercial Art I contains three units: visual communication, product design, and environmental design. Students study visual communication by analyzing advertising techniques, practicing fundamental drawing and layout techniques, creating…
Understanding Language Use in the Classroom: A Linguistic Guide for College Educators
ERIC Educational Resources Information Center
Behrens, Susan J.
2014-01-01
It is clear that a proper understanding of what academic English is and how to use it is crucial for success in college, and yet students face multiple obstacles in acquiring this new "code", not least that their professors often cannot agree amongst themselves on a definition and a set of rules. "Understanding Language Use in the…
Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn
2017-05-01
The ability to adapt walking to environmental circumstances is an important aspect of walking, yet difficult to assess. The Interactive Walkway was developed to assess walking adaptability by augmenting a multi-Kinect-v2 10-m walkway with gait-dependent visual context (stepping targets, obstacles) using real-time processed markerless full-body kinematics. In this study we determined Interactive Walkway's usability for walking-adaptability assessments in terms of between-systems agreement and sensitivity to task and subject variations. Under varying task constraints, 21 healthy subjects performed obstacle-avoidance, sudden-stops-and-starts and goal-directed-stepping tasks. Various continuous walking-adaptability outcome measures were concurrently determined with the Interactive Walkway and a gold-standard motion-registration system: available response time, obstacle-avoidance and sudden-stop margins, step length, stepping accuracy and walking speed. The same holds for dichotomous classifications of success and failure for obstacle-avoidance and sudden-stops tasks and performed short-stride versus long-stride obstacle-avoidance strategies. Continuous walking-adaptability outcome measures generally agreed well between systems (high intraclass correlation coefficients for absolute agreement, low biases and narrow limits of agreement) and were highly sensitive to task and subject variations. Success and failure ratings varied with available response times and obstacle types and agreed between systems for 85-96% of the trials while obstacle-avoidance strategies were always classified correctly. We conclude that Interactive Walkway walking-adaptability outcome measures are reliable and sensitive to task and subject variations, even in high-functioning subjects. We therefore deem Interactive Walkway walking-adaptability assessments usable for obtaining an objective and more task-specific examination of one's ability to walk, which may be feasible for both high-functioning and fragile populations since walking adaptability can be assessed at various levels of difficulty. Copyright © 2017 Elsevier B.V. All rights reserved.
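One way the reported between-systems agreement could be quantified is a Bland-Altman style analysis of paired outcome measures from the two systems; a minimal sketch is below (variable and function names are illustrative, and the cited study additionally used intraclass correlation coefficients):

```python
import numpy as np

def limits_of_agreement(iww_values, gold_values):
    """Bias and 95% limits of agreement between the Interactive Walkway and a
    gold-standard motion-registration system for one walking-adaptability
    outcome measure, given paired per-trial values."""
    diffs = np.asarray(iww_values, dtype=float) - np.asarray(gold_values, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```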
Creating Visuals for TV; A Guide for Educators.
ERIC Educational Resources Information Center
Spear, James
There are countless ways educators can improve the quality of their educational television offerings. The Guide, planned especially for the television teacher or audiovisual director, particularly those approaching the television medium for the first time, is designed to acquaint the reader with production techniques for effective visuals to…
City: Images of America. Elementary Version.
ERIC Educational Resources Information Center
Franklin, Edward; And Others
Designed to accompany an audiovisual filmstrip series devoted to presenting a visual history of life in America, this guide contains an elementary social studies (grades 2-6) unit on the American city over the last century. Using authentic visuals including paintings, posters, advertising, documentary photography, and cartoons, the guide offers…
Learning to Verbally & Visually Communicate the Metalworking Way.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Div. of Vocational Education.
This curriculum guide, one of 15 volumes written for field test use with educationally disadvantaged industrial education students needing additional instruction in the basic skill areas, deals with helping students develop basic verbal and visual communication skills while studying metalworking. Addressed in the individual units of the guide are…
Coronary angioscopy: a monorail angioscope with movable guide wire.
Nanto, S; Ohara, T; Mishima, M; Hirayama, A; Komamura, K; Matsumura, Y; Kodama, K
1991-03-01
A new angioscope was devised for easier visualization of the coronary artery. In its tip, the angioscope (Olympus) with an outer diameter of 0.8 mm had a metal lumen, through which a 0.014-in steerable guide wire passed. Using a 8F guiding catheter and a guide wire, it was introduced into the distal coronary artery. With injection of warmed saline through the guiding catheter, the coronary segments were visualized. In the attempted 70 vessels (32 left anterior descending [LAD], 10 right coronary [RCA], 28 left circumflex [LCX]) from 48 patients, 60 vessels (86%) were successfully examined. Twenty-two patients who underwent attempted examination of both LAD and LCX; both coronary arteries were visualized in 19 patients (86%). In the proximal site of the lesion, 40 patients have the diagonal branch or the obtuse marginal branch. In 34 patients (85%) the angioscope was inserted beyond these branches. In 12 very tortuous vessels, eight vessels (67%) were examined. In conclusion, the new monorail coronary angioscope with movable guide wire is useful to examine the stenotic lesions of the coronary artery.
NASA Astrophysics Data System (ADS)
Zapf, Marc Patrick H.; Boon, Mei-Ying; Matteucci, Paul B.; Lovell, Nigel H.; Suaning, Gregg J.
2015-06-01
Objective. The prospective efficacy of a future peripheral retinal prosthesis complementing residual vision to raise mobility performance in non-end stage retinitis pigmentosa (RP) was evaluated using simulated prosthetic vision (SPV). Approach. Normally sighted volunteers were fitted with a wide-angle head-mounted display and carried out mobility tasks in photorealistic virtual pedestrian scenarios. Circumvention of low-lying obstacles, path following, and navigating around static and moving pedestrians were performed either with central simulated residual vision of 10° alone or enhanced by assistive SPV in the lower and lateral peripheral visual field (VF). Three layouts of assistive vision corresponding to hypothetical electrode array layouts were compared, emphasizing higher visual acuity, a wider visual angle, or eccentricity-dependent acuity across an intermediate angle. Movement speed, task time, distance walked and collisions with the environment were analysed as performance measures. Main results. Circumvention of low-lying obstacles was improved with all tested configurations of assistive SPV. Higher-acuity assistive vision allowed for greatest improvement in walking speeds—14% above that of plain residual vision, while only wide-angle and eccentricity-dependent vision significantly reduced the number of collisions—both by 21%. Navigating around pedestrians, there were significant reductions in collisions with static pedestrians by 33% and task time by 7.7% with the higher-acuity layout. Following a path, higher-acuity assistive vision increased walking speed by 9%, and decreased collisions with stationary cars by 18%. Significance. The ability of assistive peripheral prosthetic vision to improve mobility performance in persons with constricted VFs has been demonstrated. In a prospective peripheral visual prosthesis, electrode array designs need to be carefully tailored to the scope of tasks in which a device aims to assist. We posit that maximum benefit might come from application alongside existing visual aids, to further raise life quality of persons living through the prolonged early stages of RP.
Intraoperative positioning of the hindfoot with the hindfoot alignment guide: a pilot study.
Frigg, Arno; Jud, Lukas; Valderrabano, Victor
2014-01-01
In a previous study, intraoperative positioning of the hindfoot by visual means resulted in the wrong varus/valgus position by 8 degrees and a relatively large standard deviation of 8 degrees. Thus, new intraoperative means are needed to improve the precision of hindfoot surgery. We therefore sought a hindfoot alignment guide that would be as simple as the alignment guides used in total knee arthroplasty. A novel hindfoot alignment guide (HA guide) has been developed that projects the mechanical axis from the tibia down to the heel. The HA guide enables the positioning of the hindfoot in the desired varus/valgus position and in plantigrade position in the lateral plane. The HA guide was used intraoperatively from May through November 2011 in 11 complex patients with simultaneous correction of the supramalleolar, tibiotalar, and inframalleolar alignment. Pre- and postoperative Saltzman views were taken and the position was measured. The HA guide significantly improved the intraoperative positioning compared with visual means: The accuracy with the HA guide was 4.5 ± 5.1 degrees (mean ± standard deviation) and without the HA guide 9.4 ± 5.5 degrees (P < .05). In 7 of 11 patients, the preoperative plan was changed because of the HA guide (2 avoided osteotomies, 5 additional osteotomies). The HA guide helped to position the hindfoot intraoperatively with greater precision than visual means. The HA guide was especially useful for multilevel corrections in which the need for and the amount of a simultaneous osteotomy had to be evaluated intraoperatively. Level IV, case series.
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
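For illustration, a minimal sketch of a stochastic parallel gradient descent (SPGD) loop of the kind described above, assuming a callable `metric` that returns the retinal image quality (e.g., log visual Strehl) of a candidate partial correction averaged over sampled registration errors; the names, step sizes, and iteration counts are hypothetical:

```python
import numpy as np

def spgd_optimize(metric, x0, iters=500, perturb=0.05, gain=0.5, rng=None):
    """Maximise metric(x) by SPGD: perturb all correction coefficients
    simultaneously, probe the metric on both sides of the perturbation,
    and step in the direction that improves it."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=x.shape)  # random perturbation
        dj = metric(x + delta) - metric(x - delta)               # two-sided probe
        x += gain * dj * delta                                   # SPGD update
    return x
```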
Optimal path planning for video-guided smart munitions via multitarget tracking
NASA Astrophysics Data System (ADS)
Borkowski, Jeffrey M.; Vasquez, Juan R.
2006-05-01
A recent advance in the development of smart munitions entails autonomously modifying target selection during flight in order to maximize the value of the target being destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of obstacle avoidance and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.
Visual cortex activation in kinesthetic guidance of reaching.
Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J
2007-06-01
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which arm (left/right) used to encode target location and to reach back to the remembered location and hemispace of target location (left/right side of midsagittal plane) varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of kinesthetically guided target location and use of the same network employed to guide reaches to visual targets when reaching to kinesthetic targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
ERIC Educational Resources Information Center
Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…
The Central Role of Expectations in Communication and Literacy Success: A Parent Perspective
ERIC Educational Resources Information Center
Mintun, Bonnie
2005-01-01
The author chronicles the search for augmentative and alternative communication (AAC) technology for her daughter Anna, who is now age 21. Though Anna has severe cognitive, visual and orthopedic disabilities, a more significant obstacle to finding a functional AAC system has been low expectations of her capability. Because Anna could not perform…
Youth and the City: Reflective Photography as a Tool of Urban Voice
ERIC Educational Resources Information Center
Gerodimos, Roman
2018-01-01
Young people's engagement with urban public space has been facing a number of obstacles that reflect a lack of understanding of their needs, values and priorities. The emergence of digital devices and social media as integral elements of youth culture adds further urgency to the need to understand how young people themselves visually articulate…
Hand Path Priming in Manual Obstacle Avoidance: Rapid Decay of Dorsal Stream Information
ERIC Educational Resources Information Center
Jax, Steven A.; Rosenbaum, David A.
2009-01-01
The dorsal, action-related, visual stream has been thought to have little or no memory. This hypothesis has seemed credible because functions related to the dorsal stream have been generally unsusceptible to priming from previous experience. Tests of this claim have yielded inconsistent results, however. We argue that these inconsistencies may be…
ERIC Educational Resources Information Center
Corder, Greg
2005-01-01
Science teachers face challenges that affect the quality of instruction. Tight budgets, limited resources, school schedules, and other obstacles limit students' opportunities to experience science that is visual and interactive. Incorporating web-based Java applets into science instruction offers a practical solution to these challenges. The…
What and where information in the caudate tail guides saccades to visual objects
Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide
2012-01-01
We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934
ERIC Educational Resources Information Center
Rossetto, Marietta; Chiera-Macchia, Antonella
2011-01-01
This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…
Tight coordination of aerial flight maneuvers and sonar call production in insectivorous bats
Falk, Benjamin; Kasnadi, Joseph; Moss, Cynthia F.
2015-01-01
Echolocating bats face the challenge of coordinating flight kinematics with the production of echolocation signals used to guide navigation. Previous studies of bat flight have focused on kinematics of fruit and nectar-feeding bats, often in wind tunnels with limited maneuvering, and without analysis of echolocation behavior. In this study, we engaged insectivorous big brown bats in a task requiring simultaneous turning and climbing flight, and used synchronized high-speed motion-tracking cameras and audio recordings to quantify the animals' coordination of wing kinematics and echolocation. Bats varied flight speed, turn rate, climb rate and wingbeat rate as they navigated around obstacles, and they adapted their sonar signals in patterning, duration and frequency in relation to the timing of flight maneuvers. We found that bats timed the emission of sonar calls with the upstroke phase of the wingbeat cycle in straight flight, and that this relationship changed when bats turned to navigate obstacles. We also characterized the unsteadiness of climbing and turning flight, as well as the relationship between speed and kinematic parameters. Adaptations in the bats' echolocation call frequency suggest changes in beam width and sonar field of view in relation to obstacles and flight behavior. By characterizing flight and sonar behaviors in an insectivorous bat species, we find evidence of exquisitely tight coordination of sensory and motor systems for obstacle navigation and insect capture. PMID:26582935
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.
2011-01-01
Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
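As a hedged illustration of the inverse-consistency idea mentioned above (written in Python rather than MATLAB, and not part of DIRART itself), one can compose a forward and a backward displacement field and measure the residual error:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(fwd, bwd):
    """fwd, bwd: 2-D displacement fields of shape (2, H, W) in pixel units.
    Returns the mean magnitude of fwd(x) + bwd(x + fwd(x)), which is ~0 for a
    perfectly inverse-consistent registration pair."""
    _, h, w = fwd.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # coordinates where the forward field maps each pixel
    coords = np.stack([yy + fwd[0], xx + fwd[1]])
    residual = np.zeros_like(fwd, dtype=float)
    for axis in range(2):  # sample the backward field at the warped positions
        residual[axis] = fwd[axis] + map_coordinates(bwd[axis], coords,
                                                     order=1, mode="nearest")
    return float(np.mean(np.linalg.norm(residual, axis=0)))
```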
Kodjebacheva, Gergana Damianova; Maliski, Sally; Coleman, Anne L
2015-01-01
To investigate the perceptions, behaviors, and recommendations that parents, school nurses, and teachers have regarding children's use of eyeglasses. Focus groups with parents, school nurses, and teachers were conducted. The study took place in one Southern California school district. There were 39 participants, including 24 parents, seven school nurses, and eight teachers. An experienced moderator guided the focus group discussions. Transcripts were analyzed using grounded theory techniques. Participants perceive visual impairment as a serious problem in the development of children. The lack of eyeglasses may lead to problems such as tiredness, headaches, inability to focus on school work, and decreased reading speed. Participants experienced disappointment, unhappiness, worry, and concern when they realized they needed eyeglasses at a young age. Negative societal perceptions toward eyeglasses, lack of eye doctors in minority communities, parental perceptions that children do not need eyeglasses, and peer bullying of children wearing eyeglasses are key obstacles to children's use of eyeglasses. Participants suggest school and national campaigns featuring respected public figures who wear eyeglasses to promote positive attitudes toward eyeglasses. Parents and teachers who closely follow the academic development of children have observed that visual impairment has negative consequences for the scholastic achievement of children. They recommend interventions to promote the attractiveness of eyeglasses in society. The participants discuss the need for a national preventative message for eye care similar to the message for dental care. The public health message should emphasize the importance of embracing and respecting differences among individuals.
ERIC Educational Resources Information Center
Umansky, Warren; And Others
The guide offers a means for evaluating specific learning characteristics of visually impaired children at three levels: prereadiness (prekindergarten), readiness (kindergarten), and academic (primary grades). Items are designed to be administered by informal observation and structured testing. Score sheets contain space for reporting two testing…
Food: Images of America. Social Studies Unit, Elementary Grades 2-6.
ERIC Educational Resources Information Center
Franklin, Edward; And Others
Designed to accompany an audiovisual filmstrip series devoted to presenting a visual history of life in America, this guide contains an elementary school (grades 2-6) unit on American food over the last century. Using authentic visuals including paintings, advertising, label art, documentary photography, and a movie still, the guide offers…
An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.
ERIC Educational Resources Information Center
Albert, Richard N.
Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…
The Computer: An Art Tool for the Visually Gifted. A Curriculum Guide.
ERIC Educational Resources Information Center
Suter, Thomas E.; Bibbey, Melissa R.
This curriculum guide, developed and used in Wheelersburg (Ohio) with visually talented students, shows how such students can be taught to utilize computers as an art medium and tool. An initial section covers program implementation including setup, class structure and scheduling, teaching strategies, and housecleaning and maintenance. Seventeen…
Sáles, Christopher S; Manche, Edward E
2014-01-01
Background To compare wavefront (WF)-guided and WF-optimized laser in situ keratomileusis (LASIK) in hyperopes with respect to the parameters of safety, efficacy, predictability, refractive error, uncorrected distance visual acuity, corrected distance visual acuity, contrast sensitivity, and higher order aberrations. Methods Twenty-two eyes of eleven participants with hyperopia with or without astigmatism were prospectively randomized to receive WF-guided LASIK with the VISX CustomVue S4 IR or WF-optimized LASIK with the WaveLight Allegretto Eye-Q 400 Hz. LASIK flaps were created using the 150-kHz IntraLase iFS. Evaluations included measurement of uncorrected distance visual acuity, corrected distance visual acuity, <5% and <25% contrast sensitivity, and WF aberrometry. Patients also completed a questionnaire detailing symptoms on a quantitative grading scale. Results There were no statistically significant differences between the groups for any of the variables studied after 12 months of follow-up (all P>0.05). Conclusion This comparative case series of 11 subjects with hyperopia showed that WF-guided and WF-optimized LASIK had similar clinical outcomes at 12 months. PMID:25419115
Transient visual pathway critical for normal development of primate grasping behavior.
Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A
2018-02-06
An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.
A closer look at visually guided saccades in autism and Asperger’s disorder
Johnson, Beth P.; Rinehart, Nicole J.; Papadopoulos, Nicole; Tonge, Bruce; Millist, Lynette; White, Owen; Fielding, Joanne
2012-01-01
Motor impairments have been found to be a significant clinical feature associated with autism and Asperger’s disorder (AD) in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioral evidence suggests greater disruption of the cerebellum in HFA than AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of the cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have only assessed basic saccade metrics, such as latency, amplitude, and gain, as well as peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. It was found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than AD, and suggest fundamental difficulties with visual error monitoring in HFA. PMID:23162442
Visually Guided Control of Movement
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)
1991-01-01
The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.
PRIMUS: autonomous navigation in open terrain with a tracked vehicle
NASA Astrophysics Data System (ADS)
Schaub, Guenter W.; Pfaendner, Alfred H.; Schaefer, Christoph
2004-09-01
The German experimental robotics program PRIMUS (PRogram for Intelligent Mobile Unmanned Systems) has focused for more than 12 years, over several project phases and under specific realization aspects, on solutions for autonomous driving in unknown open terrain. The main task of the program is to develop algorithms for a high degree of autonomous navigation skill using off-the-shelf hardware and sensor technology, and to integrate these into military vehicles. For obstacle detection, a Dornier 3D LADAR is integrated on a tracked vehicle, the "Digitized WIESEL 2". For road following, a digital video camera and a visual perception module from the Universität der Bundeswehr München (UBM) have been integrated. This paper gives an overview of the PRIMUS program with a focus on the last program phase, D (2001-2003). This includes the system architecture, the description of the modes of operation, and the technology development, with a focus on obstacle avoidance and obstacle classification using a 3D LADAR. A collection of experimental results and a short look at the next steps in the German robotics program conclude the paper.
Novel approaches to helicopter obstacle warning
NASA Astrophysics Data System (ADS)
Seidel, Christian; Samuelis, Christian; Wegner, Matthias; Münsterer, Thomas; Rumpf, Thomas; Schwartz, Ingo
2006-05-01
EADS Germany is the world market leader in commercial Helicopter Laser Radar (HELLAS) obstacle warning systems. The HELLAS warning system was introduced into the market in 2000, is in service with the German Border Control (Bundespolizei) and the Royal Thai Air Force, and has been successfully evaluated by the Foreign Comparative Test (FCT) program of USSOCOM. The successor system, HELLAS-Awareness, is currently in development. It will have extended sensor performance, enhanced real-time data processing capabilities, and advanced HMI features. We will give an outline of the new sensor unit concerning detection technology and helicopter integration aspects. The system provides a wide field of view with additional dynamic line-of-sight steering and a large detection range in combination with a high frame rate of 3 Hz. The workflow of the data processing will be presented with a focus on novel filter techniques and obstacle classification methods. As is commonly known, the former are indispensable due to unavoidable statistical measurement errors and solarisation. The amount of information in the filtered raw data is further reduced by ground segmentation. The remaining raised objects are extracted and classified in several stages into different obstacle classes. We will show the prioritization function, which orders the obstacles according to their threat potential to the helicopter, taking into account the current flight dynamics. The priority of an object determines the display and provision of warnings to the pilot. Possible HMI representations include video or FLIR overlays on multifunction displays, audio warnings, and visualization of information on helmet-mounted displays and digital maps. Different concepts will be presented.
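A hedged sketch of the kind of prioritization described above, ordering detected obstacles by their threat to the current flight path; the threat heuristic, field names, and function name are assumptions for illustration, not the HELLAS algorithm itself:

```python
import numpy as np

def prioritize(obstacles, position, velocity):
    """Order detected obstacles by a simple threat score: obstacles that are
    close and lie near the current flight direction rank highest. Each obstacle
    is a dict with a 3-D 'position' (it could also carry a class label such as
    'wire' or 'pylon' to weight the score)."""
    position = np.asarray(position, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    speed = np.linalg.norm(velocity)
    heading = velocity / speed

    def threat(obstacle):
        rel = np.asarray(obstacle["position"], dtype=float) - position
        dist = np.linalg.norm(rel)
        closing = max(float(np.dot(rel / dist, heading)), 0.0)  # 1.0 = dead ahead
        return closing * speed / dist                            # larger = more urgent

    return sorted(obstacles, key=threat, reverse=True)
```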
Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit Ms
2018-01-01
To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by manual marking method using bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality was assessed with a ray tracing aberrometer. Primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p =0.005). Postoperative refractive cylinder was -0.89±0.35 D in group I and -0.64±0.36 D in group II ( p =0.003). Visual acuity was comparable between both the groups. Visual quality measured in terms of Strehl ratio ( p <0.05) and modulation transfer function (MTF) ( p <0.05) was significantly better in the image-guided surgery group. Significant negative correlation was observed between deviation from target axis and visual quality parameters (Strehl ratio and MTF) ( p <0.05). Image-guided surgery allows precise alignment of toric IOL without need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment.
Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit MS
2018-01-01
Purpose To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. Patients and methods This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by manual marking method using bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality was assessed with a ray tracing aberrometer. Primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Results Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was −0.89±0.35 D in group I and −0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between both the groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. Significant negative correlation was observed between deviation from target axis and visual quality parameters (Strehl ratio and MTF) (p<0.05). Conclusion Image-guided surgery allows precise alignment of toric IOL without need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment. PMID:29731603
Cao, Yi; Wang, Chao-Qun; Xu, Feng; Jia, Xiu-Hong; Liu, Guang-Xue; Yang, Sheng-Chao; Long, Guang-Qiang; Chen, Zhong-Jian; Wei, Fu-Zhou; Yang, Shao-Zhou; Fukuda, Kozo; Wang, Xuan; Cai, Shao-Qing
2016-10-01
Panax notoginseng is a commonly used traditional Chinese medicine with a blood-activating effect, but it suffers from a continuous cropping obstacle problem during cultivation. In the present study, a semimicro extraction method using water-saturated n-butanol on a 0.1 g notoginseng sample was established, with good repeatability (RSD < 2.5%) and 9.6%-20.6% higher extraction efficiency for seven saponins than the conventional method. A total of 16 characteristic peaks were identified by LC-MS-IT-TOF, including eight 20(S)-protopanaxatriol (PPT) type saponins and eight 20(S)-protopanaxadiol (PPD) type saponins. The established method was used to evaluate the quality of notoginseng samples cultivated with manual intervention methods intended to overcome continuous cropping obstacles. As a result, HPLC fingerprint similarity, the content of notoginsenoside Fa, and the ratio of notoginsenoside K to notoginsenoside Fa (N-K/Fa) were found to be valuable markers of sample quality in continuous cropping obstacle research, of which N-K/Fa could also be applied to the analysis of notoginseng samples of different growth years. Notoginseng samples with continuous cropping obstacle had HPLC fingerprint similarities lower than 0.87, inconsistent with the normal samples, and had significantly lower contents of notoginsenoside Fa and significantly higher N-K/Fa ratios (2.35-4.74) than the normal group (0.45-1.33). All samples in the first manual intervention group showed high similarity with the normal group (>0.87), similar contents of the common peaks, and similar N-K/Fa ratios (0.42-2.06). The content of notoginsenoside K in the second manual intervention group was higher than in the normal group. All samples except two displayed similarities higher than 0.87 and had contents of the 16 saponins close to those of the normal group. The results showed that notoginseng samples with continuous cropping obstacle had lower quality than normal samples, and that manual intervention methods could improve their quality to different degrees. The method established in this study is simple, fast, and accurate, and the markers may provide new guides for quality control in continuous cropping obstacle research on notoginseng. Copyright © by the Chinese Pharmaceutical Association.
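A minimal sketch of the decision rule these markers suggest, with thresholds taken from the ranges quoted above; this is an illustrative reading of the abstract, not a validated classifier:

```python
def flag_continuous_cropping(similarity, nk_over_fa):
    """Flag a notoginseng sample as showing a continuous cropping obstacle when
    its HPLC fingerprint similarity is below 0.87 and its notoginsenoside K / Fa
    ratio falls in the range reported for affected samples (2.35-4.74) rather
    than the normal range (0.45-1.33)."""
    return similarity < 0.87 and 2.35 <= nk_over_fa <= 4.74
```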
ERIC Educational Resources Information Center
Rose, Susan; And Others
Three papers focus on applications of computer graphics with deaf and severely language-impaired children. The first describes drawing tablet software that allowed students to use visual and manipulative characteristics to enhance problem solving and creativity skills. Students were thus able to solve problems without the obstacles of language.…
Toward autonomous rotorcraft flight in degraded visual environments: experiments and lessons learned
NASA Astrophysics Data System (ADS)
Stambler, Adam; Spiker, Spencer; Bergerman, Marcel; Singh, Sanjiv
2016-05-01
Unmanned cargo delivery to combat outposts will inevitably involve operations in degraded visual environments (DVE). When DVE occurs, the aircraft autonomy system needs to be able to function regardless of the obscurant level. In 2014, Near Earth Autonomy established a baseline perception system for autonomous rotorcraft operating in clear air conditions, when its m3 sensor suite and perception software enabled autonomous, no-hover landings onto unprepared sites populated with obstacles. The m3's long-range lidar scanned the helicopter's path and the perception software detected obstacles and found safe locations for the helicopter to land. This paper presents the results of initial tests with the Near Earth perception system in a variety of DVE conditions and analyzes them from the perspective of mission performance and risk. Tests were conducted with the m3's lidar and a lightweight synthetic aperture radar in rain, smoke, snow, and controlled brownout experiments. These experiments showed the capability to penetrate through mild DVE but the perceptual capabilities became degraded with the densest brownouts. The results highlight the need for not only improved ability to see through DVE, but also for improved algorithms to monitor and report DVE conditions.
Atmospheric dispersion of a heavier-than-air gas near a two-dimensional obstacle
NASA Astrophysics Data System (ADS)
Sutton, S. B.; Brandt, H.; White, B. R.
1986-04-01
Flow over a two-dimensional obstacle and dispersion of a heavier-than-air gas near the obstacle were studied. Two species, one representing air and the other representing the heavier-than-air gas were treated. Equations for mass and momentum were cast in mass-averaged form, with turbulent Reynolds stresses and mass fluxes modeled using eddy-viscosity and diffusivity hypotheses. A two-equation k-ɛ turbulence model was used to determine the effective turbulent viscosity. Streamline curvature and buoyancy corrections were added to the basic turbulence formulation. The model equations were solved using finite difference techniques. An alternating-direction-implicit (ADI) technique was used to solve the parabolic transport equations and a direct matrix solver was used to solve the elliptic pressure equation. Mesh sensitivities were investigated to determine the optimum mesh requirements for the final calculations. It was concluded that at least 10 grid spaces were required across the obstacle width and 15 across the obstacle height to obtain valid solutions. A non-uniform mesh was used to concentrate the grid points at the top of the obstacle. Experimental measurements were made with air flow over a 7.6 by 7.6 cm obstacle in a boundary-layer wind tunnel. Smoke visualization revealed a low-frequency oscillation of the bubble downstream of the obstacle. Hot-wire anemometer data are presented for the mean velocity and turbulent kinetic energy at the mid-plane of the obstacle and the mid-plane of the downstream recirculation bubble. A single hot-wire probe was found to be suitable for determining mean streamwise velocities with an accuracy of 11 %. The downstream recirculation bubble was unsteady and had a length range from 3 to 8 obstacle lengths. The experimental results for flow over the obstacle were compared with numerical calculations to validate the numerical solution procedure. A sensitivity study on the effect of curvature correction and variation of turbulence model constants on the numerical solution was conducted. Calculations that included the curvature correction model gave a downstream recirculation bubble length of 5.9 obstacle lengths while excluding the correction reduced this length to 4.4. In the second part of the study, numerical calculations were performed for the dispersion of a heavier-than-air gas in the vicinity of the two-dimensional obstacle. Characteristics of an adiabatic boundary layer were used in these calculations. The densities of the contaminant gases were 0, 25 and 50% greater than the air density. Calculations were performed with the contaminant injection source upstream and downstream of the obstacle. Use of the pressure gradient model reduced the size of the dense gas cloud by as much as 12%. The curvature correction model also affected the cloud expanse by reducing the effective turbulent viscosity in the downstream recirculation bubble. The location of the injection source had the largest impact on the cloud size. The area of the cloud within the 5 % contour was three times larger for downstream injection than for upstream injection.
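For context on the turbulence closure named in the abstract: the standard two-equation k-ε model relates the effective turbulent (eddy) viscosity to the turbulent kinetic energy k and its dissipation rate ε. The baseline relation below uses the conventional constant; the study's streamline-curvature and buoyancy corrections modify this form in ways the abstract does not specify.

```latex
% Standard k-epsilon eddy-viscosity closure (conventional constant shown).
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad C_\mu \approx 0.09, \qquad
-\overline{u_i' u_j'} = \nu_t \left( \frac{\partial U_i}{\partial x_j}
 + \frac{\partial U_j}{\partial x_i} \right) - \frac{2}{3}\,k\,\delta_{ij}.
```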
Artificial Lighting for Modern Schools: A Guide for Administrative Use.
ERIC Educational Resources Information Center
Reida, George W.; And Others
The development of a good visual environment and economically feasible lighting installations in schools is discussed in this guide. Eighty percent of all school learning is gained through the eyes, as estimated by the U.S. Office of Education. Good school lighting is comfortable, glare-free and adequate for the visual task. Eye strain and unnecessary…
The Role of Clarity and Blur in Guiding Visual Attention in Photographs
ERIC Educational Resources Information Center
Enns, James T.; MacDonald, Sarah C.
2013-01-01
Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…
Self-Study and Evaluation Guide/1979 Edition. Section D-16: Other Service Program.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
The self evaluation guide is explained to be designed for accreditation of services to blind and visually handicapped students in service programs for which the NAC (National Accreditation Council for Agencies Serving the Blind and Visually Handicapped) does not have specific program standards (such as radio reading services and library services).…
ERIC Educational Resources Information Center
Byun, Tara McAllister; Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of…
Wisconsin School for the Visually Handicapped. A Curriculum Guide for Students. Bulletin No. 7393.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison. Div. for Handicapped Children and Pupil Services.
The curriculum guide sets forth the course of study at the Wisconsin School for the Visually Handicapped. An initial section presents the school's philosophy regarding the need for specialty skills to be incorporated into regular academic instruction. The content of the primary and elementary programs (kindergarten through grade 6) is reviewed in…
K9 Buddies: A Program of Guide Dogs for the Blind
ERIC Educational Resources Information Center
Ritter, Joanne
2007-01-01
Today, exceptional dogs that have been specially bred and socialized are paired with children who are blind or visually impaired. These dogs, called "K9 Buddies," are from Guide Dogs for the Blind, a national nonprofit organization with a mission to offer skilled mobility dogs and training free-of-charge to adults with visual impairments…
Vargas-Martín, Fernando; García-Pérez, Miguel A
2005-08-01
"Looked-but-failed-to-see" errors are a common cause of accidents, but it has never been determined whether obstructive elements within an automobile (e.g., window posts or the interior rearview mirror) have actually been an obstacle to vision. This work describes a technique that can easily be used to determine the available visual field of drivers at the wheel and illustrates its potential in a number of applications. The technique involves calibrating a minicamera for use as a device for perimetry and then mounting it on spectacles so that it lies between the eyes of the subject who wears them. With the spectacle-mounted camera worn by a driver, snapshots were taken when the automobile was parked and the driver looked in different directions, and video sequences were recorded during natural driving in an urban area and on a winding mountain road. All of the automobiles studied place obstacles to vision for any given direction of gaze, although the resultant scotomata have different sizes and are placed in different regions of the visual field for each combination of car and driver. These regions encroach into central vision as drivers turn their head and eyes as required by the characteristics of the road or the urban area during natural driving, in some cases resulting in very poor visibility regardless of the good vision of the driver and the certification of the automobile. Our technique is useful for determining what parts of a given scene are visible to a given driver on a given automobile and, hence, it is useful not only as a tool for accident investigation and in visual ergonomics, but also as an aid for the design of automobiles and road environments.
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
Occupant Injury Severity and Accident Causes in Helicopter Emergency Medical Services (1983-2014).
Boyd, Douglas D; Macchiarella, Nickolas D
2016-01-01
Helicopter emergency medical services (HEMS) transport critically ill patients to/between emergency care facilities and operate in a hazardous environment: the destination site is often encumbered with obstacles, difficult to visualize at night, and lacks instrument approaches for degraded visibility. The study objectives were to determine 1) HEMS accident rates and causes; 2) occupant injury severity profiles; and 3) whether accident aircraft were certified to the more stringent crashworthiness standards implemented two decades ago. The National Transportation Safety Board (NTSB) aviation accident database was used to identify HEMS mishaps for the years spanning 1983-2014. Contingency tables (Pearson Chi-square or Fisher's exact test) were used to determine differences in proportions. A generalized linear model (Poisson distribution) was used to determine if accident rates differed over time. While the HEMS accident rate decreased by 71% across the study period, the fraction of fatal accidents (36-50%) and the injury severity profiles were unchanged. None of the accident aircraft fully satisfied the current crashworthiness standards. Failure to clear obstacles and visual-to-instrument flight, the most frequent accident causes (37 and 26%, respectively), showed a downward trend, whereas accidents ascribed to aircraft malfunction showed an upward trend over time. HEMS operators should consider updating their fleet to the current, more stringent crashworthiness standards in an attempt to reduce injury severity. Additionally, toward further mitigating accidents ascribed to inadvertent visual-to-instrument conditions, HEMS aircraft should be avionics-equipped for instrument flight rules flight.
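The two statistical tools named in the abstract (Fisher's exact test for proportions and a Poisson GLM for rate trends) can be sketched as follows. This is illustration only: the yearly counts, exposures, and two-era split are hypothetical, not the NTSB values analyzed in the paper, and it assumes the statsmodels and SciPy libraries.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import fisher_exact

# Hypothetical HEMS data for illustration; not the values from the paper.
years = np.arange(1983, 2015)
accidents = np.random.poisson(6, size=years.size)   # yearly accident counts
flight_hours = np.full(years.size, 3.0e5)            # exposure per year

# Poisson GLM for a time trend in the accident *rate* (log exposure as offset).
X = sm.add_constant(years - years.min())
model = sm.GLM(accidents, X, family=sm.families.Poisson(),
               offset=np.log(flight_hours))
print(model.fit().summary())

# Fisher's exact test comparing the fraction of fatal accidents in two eras.
#                 fatal  non-fatal
table = np.array([[18,   32],      # e.g., 1983-1998 (hypothetical)
                  [15,   35]])     # e.g., 1999-2014 (hypothetical)
print(fisher_exact(table))
```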
Sensor supported pilot assistance for helicopter flight in DVE
NASA Astrophysics Data System (ADS)
Waanders, Tim; Münsterer, T.; Kress, M.
2013-05-01
Helicopter operations at low altitude are to this day only performed under VFR conditions in which safe piloting of the aircraft relies on the pilot's visual perception of the outside environment. However, there are situations in which a deterioration of visibility conditions may cause the pilot to lose important visual cues thereby increasing workload and compromising flight safety and mission effectiveness. This paper reports on a pilot assistance system for all phases of flight which is intended to: • Provide navigational support and mission management • Support landings/take-offs in unknown environment and in DVE • Enhance situational awareness in DVE • Provide obstacle and terrain surface detection and warning • Provide upload, sensor based update and download of database information for debriefing and later missions. The system comprises a digital terrain and obstacle database, tactical information, flight plan management combined with an active 3D sensor enabling the above mentioned functionalities. To support pilots during operations in DVE, an intuitive 3D/2D cueing through both head-up and head-down means is proposed to retain situational awareness. This paper further describes the system concept and will elaborate on results of simulator trials in which the functionality was evaluated by operational pilots in realistic and demanding scenarios such as a SAR mission to be performed in mountainous area under different visual conditions. The objective of the simulator trials was to evaluate the functional integration and HMI definition for the NH90 Tactical Transport Helicopter.
Guide to Rebuilding Governance in Stability Operations: A Role for the Military?
2009-06-01
departments to assess needs, develop a list of necessary parts and equipment, and prepare an action plan for restoration of services. With rapid...for drafting, reviewing, and voting on a new constitution was pursued. The Rwandan government’s action plan included: training and sensitization of...together to build collaborative relationships, share strategies for addressing obstacles, and develop joint action plans. Provide opportunities for CSOs to
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high speed tracking device, communicating with the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles.
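The abstract describes vision guidance in which a tracking device reports the X, Y image coordinates of blobs along the lane markers, from which the computer derives steering commands. As a hedged illustration only (the Bearcat's actual control law, gains, and sign conventions are not given in the abstract), a minimal proportional steering sketch from two blob centroids:

```python
import math

def steering_from_blobs(blob_near, blob_far, image_width,
                        k_offset=0.004, k_heading=0.8):
    """Toy proportional steering from two lane-marker blob centroids (pixels).

    blob_near / blob_far: (x, y) centroids of the tracked line near and far ahead.
    The command combines how far the line sits from the image centre with the
    line's slope in the image. Gains and the control law are illustrative
    assumptions, not the cited vehicle's.
    """
    cx = image_width / 2.0
    lateral_error = blob_near[0] - cx                       # pixels off-centre
    heading_error = math.atan2(blob_far[0] - blob_near[0],  # line slope in image
                               abs(blob_far[1] - blob_near[1]) + 1e-6)
    return k_offset * lateral_error + k_heading * heading_error

# Example with hypothetical blob coordinates in a 640-pixel-wide image.
print(steering_from_blobs(blob_near=(400, 430), blob_far=(380, 250), image_width=640))
```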
Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang
2017-04-01
In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including every-day objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.
Cao, Xiang; Zhu, Daqi; Yang, Simon X
2016-11-01
Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicles (multi-AUVs) exploration. This paper focuses on an effective strategy for multi-AUV target search in the 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information of environment from the sonar data to build a grid map of the underwater environments. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search path to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static targets search, dynamic targets search, and one or several AUVs break down in the 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multi-AUV to achieve search task of multiple targets with higher efficiency and adaptability compared with other algorithms.
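Bioinspired neurodynamics models of the kind described here are typically built on a shunting equation over the grid map: the target injects positive external input that diffuses through lateral connections (global attraction), obstacles inject negative input that does not propagate (local repulsion), and each vehicle steps to the neighbouring cell with the highest activity. The sketch below is a reduced 2-D version under that assumption; the paper's model is 3-D and its parameters and connection weights are not given in the abstract, so all values here are assumed.

```python
import numpy as np

def shunting_landscape(shape, target, obstacles, A=10.0, B=1.0, D=1.0,
                       dt=0.02, iters=400):
    """Iterate a shunting-type neurodynamics model on a 2-D grid (assumed
    parameters; zero-padded boundary, 4-connected lateral excitation)."""
    x = np.zeros(shape)                      # neural activity landscape
    I = np.zeros(shape)                      # external inputs
    I[target] = 10.0                         # target: positive input (global attraction)
    for ob in obstacles:
        I[ob] = -10.0                        # obstacles: negative input (local repulsion)
    for _ in range(iters):
        pos = np.maximum(x, 0.0)             # only positive activity propagates
        padded = np.pad(pos, 1)              # zero boundary (no wrap-around)
        lateral = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:])
        x += dt * (-A * x + (B - x) * (np.maximum(I, 0.0) + lateral)
                   - (D + x) * np.maximum(-I, 0.0))
    return x

def next_cell(x, pos):
    """Steepest-gradient rule: step to the 4-neighbour with the highest activity."""
    r, c = pos
    nbrs = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < x.shape[0] and 0 <= c + dc < x.shape[1]]
    return max(nbrs, key=lambda p: x[p])

landscape = shunting_landscape((20, 20), target=(15, 15), obstacles=[(8, 8), (8, 9)])
print(next_cell(landscape, (2, 2)))
```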
Laskowski-Jones, Linda; Caudell, Michael J; Hawkins, Seth C; Jones, Lawrence J; Dymond, Chelsea A; Cushing, Tracy; Gupta, Sanjey; Young, David S; Starling, Jennifer M; Bounds, Richard
2017-10-01
Obstacle, adventure and endurance competitions in challenging or remote settings are increasing in popularity. A literature search indicates a dearth of evidence-based research on the organisation of medical care for wilderness competitions. The organisation of medical care for each event is best tailored to specific race components, participant characteristics, geography, risk assessments, legal requirements, and the availability of both local and outside resources. Considering the health risks and logistical complexities inherent in these events, there is a compelling need for guiding principles that bridge the fields of wilderness medicine and sports medicine in providing a framework for the organisation of medical care delivery during wilderness and remote obstacle, adventure and endurance competitions. This narrative review, authored by experts in wilderness and operational medicine, provides such a framework. The primary goal is to assist organisers and medical providers in planning for sporting events in which participants are in situations or locations that exceed the capacity of local emergency medical services resources. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Clinical and Laboratory Evaluation of Peripheral Prism Glasses for Hemianopia
Giorgi, Robert G.; Woods, Russell L.; Peli, Eli
2008-01-01
Purpose Homonymous hemianopia (the loss of vision on the same side in each eye) impairs the ability to navigate and walk safely. We evaluated peripheral prism glasses as a low vision optical device for hemianopia in an extended wearing trial. Methods Twenty-three patients with complete hemianopia (13 right) with neither visual neglect nor cognitive deficit enrolled in the 5-visit study. To expand the horizontal visual field, patients’ spectacles were fitted with both upper and lower Press-On™ Fresnel prism segments (each 40 prism diopters) across the upper and lower portions of the lens on the hemianopic (“blind”) side. Patients were asked to wear these spectacles as much as possible for the duration of the study, which averaged 9 (range: 5 to 13) weeks. Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived quality of life were measured. Results Clinical Success: 14 of 21 (67%) patients chose to continue to wear the peripheral prism glasses at the end of the study (2 patients did not complete the study for non-vision reasons). At long-term follow-up (8 to 51 months), 5 of 12 (42%) patients reported still wearing the device. Visual Field Expansion: Expansion of about 22 degrees in both the upper and lower quadrants was demonstrated for all patients (binocular perimetry, Goldmann V4e). Perceived Direction: Two patients demonstrated a transient adaptation to the change in visual direction produced by the peripheral prism glasses. Quality of Life: At study end, reduced difficulty noticing obstacles on the hemianopic side was reported. Conclusions The peripheral prism glasses provided reported benefits (usually in obstacle avoidance) to 2/3 of the patients completing the study, a very good success rate for a vision rehabilitation device. Possible reasons for long-term discontinuation and limited adaptation of perceived direction are discussed. PMID:19357552
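As a plausible back-of-the-envelope check on the reported numbers (not necessarily the authors' own calculation): a prism of power Δ prism diopters displaces the image by Δ centimetres at one metre, so its deflection angle satisfies tan θ = Δ/100.

```latex
\theta = \arctan\!\left(\frac{\Delta}{100}\right), \qquad
\theta_{40\Delta} = \arctan(0.4) \approx 21.8^{\circ},
```

which is consistent with the roughly 22-degree field expansion measured in the upper and lower quadrants with the 40-prism-diopter segments.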
ERIC Educational Resources Information Center
Gaver, Wayne
Presented is an industrial arts curriculum guide for woodworking which developed out of a 3 year program designed to meet the unmet vocational education needs of visually impaired students enrolled in junior high, secondary, and community colleges in a five county region of California, and to provide inservice training to regular vocational…
Enhancing visual search abilities of people with intellectual disabilities.
Li-Tsang, Cecilia W P; Wong, Jackson K K
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided cue strategies that added either motion contrast or an additional cue to the basic search task. Repeated measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the guided strategies were able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both the guided cue and guided motion search tasks demonstrated functionally similar effects, confirming the non-specific character of salience. These findings suggest that the visual search efficiency of people with ID was greatly improved if the target was made salient through cueing as the complexity of the display increased (i.e., set size increased). This study has important implications for the design of the visual search format of computerized programs developed to help people with ID learn new tasks.
Onshore industrial wind turbine locations for the United States
Diffendorfer, Jay E.; Compton, Roger; Kramer, Louisa; Ancona, Zach; Norton, Donna
2017-01-01
This dataset provides industrial-scale onshore wind turbine locations in the United States, corresponding facility information, and turbine technical specifications. The database has wind turbine records that have been collected, digitized, locationally verified, and internally quality controlled. Turbines from the Federal Aviation Administration Digital Obstacles File, through product release date July 22, 2013, were used as the primary source of turbine data points. The dataset was subsequently revised and reposted as described in the revision histories for the report. Verification of the turbine positions was done by visual interpretation using high-resolution aerial imagery in Environmental Systems Research Institute (Esri) ArcGIS Desktop. Turbines without Federal Aviation Administration Obstacles Repository System numbers were visually identified and point locations were added to the collection. We estimated a locational error of plus or minus 10 meters for turbine locations. Wind farm facility names were identified from publicly available facility datasets. Facility names were then used in a Web search of additional industry publications and press releases to attribute additional turbine information (such as manufacturer, model, and technical specifications of wind turbines). Wind farm facility location data from various wind and energy industry sources were used to search for and digitize turbines not in existing databases. Technical specifications for turbines were assigned based on the wind turbine make and model as described in literature, specifications listed in the Federal Aviation Administration Digital Obstacles File, and information on the turbine manufacturer’s Web site. Some facility and turbine information on make and model did not exist or was difficult to obtain. Thus, uncertainty may exist for certain turbine specifications. That uncertainty was rated and a confidence was recorded for both location and attribution data quality.
Visual Recognition of the Elderly Concerning Risks of Falling or Stumbling Indoors in the Home
Katsura, Toshiki; Miura, Norio; Hoshino, Akiko; Usui, Kanae; Takahashi, Yasuro; Hisamoto, Seiichi
2011-01-01
Objective: The objective of this study was to verify the recognition of dangers and obstacles within a house in the elderly when walking based on analyses of gaze point fixation. Materials and Methods: The rate of recognizing indoor dangers was compared among 30 elderly, 14 middle-aged and 11 young individuals using the Eye Mark Recorder. Results: 1) All of the elderly, middle-aged and young individuals showed a high recognition rate of 100% or near 100% when ascending outdoor steps but a low rate of recognizing obstacles placed on the steps. They showed a recognition rate of about 60% when descending steps from residential premises to the street. The rate of recognizing middle steps in the elderly was significantly lower than that in younger and middle-aged individuals. Regarding recognition indoors, when ascending stairs, all of the elderly, middle-aged and young individuals showed a high recognition rate of nearly 100%. When descending stairs, they showed a recognition rate of 70-90%. However, although the recognition rate in the elderly was lower than in younger and middle-aged individuals, no significant difference was observed. 2) When moving indoors, all of the elderly, middle-aged and young individuals showed a recognition rate of 70%-80%. The recognition rate was high regarding obstacles such as floors, televisions and chests of drawers but low for obstacles in the bathroom and steps on the path. The rate of recognizing steps of doorsills forming the division between a Japanese-style room and corridor as well as obstacles in a Japanese-style room was low, and the rate in the elderly was low, being 40% or less. Conclusion: The rate of recognizing steps of doorsills as well as obstacles in a Japanese-style room was lower in the elderly in comparison with middle-aged or young individuals. PMID:25648876
Tight coordination of aerial flight maneuvers and sonar call production in insectivorous bats.
Falk, Benjamin; Kasnadi, Joseph; Moss, Cynthia F
2015-11-01
Echolocating bats face the challenge of coordinating flight kinematics with the production of echolocation signals used to guide navigation. Previous studies of bat flight have focused on kinematics of fruit and nectar-feeding bats, often in wind tunnels with limited maneuvering, and without analysis of echolocation behavior. In this study, we engaged insectivorous big brown bats in a task requiring simultaneous turning and climbing flight, and used synchronized high-speed motion-tracking cameras and audio recordings to quantify the animals' coordination of wing kinematics and echolocation. Bats varied flight speed, turn rate, climb rate and wingbeat rate as they navigated around obstacles, and they adapted their sonar signals in patterning, duration and frequency in relation to the timing of flight maneuvers. We found that bats timed the emission of sonar calls with the upstroke phase of the wingbeat cycle in straight flight, and that this relationship changed when bats turned to navigate obstacles. We also characterized the unsteadiness of climbing and turning flight, as well as the relationship between speed and kinematic parameters. Adaptations in the bats' echolocation call frequency suggest changes in beam width and sonar field of view in relation to obstacles and flight behavior. By characterizing flight and sonar behaviors in an insectivorous bat species, we find evidence of exquisitely tight coordination of sensory and motor systems for obstacle navigation and insect capture. © 2015. Published by The Company of Biologists Ltd.
Apfelbaum, Henry; Pelah, Adar; Peli, Eli
2007-01-01
Virtual reality locomotion simulators are a promising tool for evaluating the effectiveness of vision aids to mobility for people with low vision. This study examined two factors to gain insight into the verisimilitude requirements of the test environment: the effects of treadmill walking and the suitability of using controls as surrogate patients. Ten “tunnel vision” patients with retinitis pigmentosa (RP) were tasked with identifying which side of a clearly visible obstacle their heading through the virtual environment would lead them, and were scored both on accuracy and on their distance from the obstacle when they responded. They were tested both while walking on a treadmill and while standing, as they viewed a scene representing progress through a shopping mall. Control subjects, each wearing a head-mounted field restriction to simulate the vision of a paired patient, were also tested. At wide angles of approach, controls and patients performed with a comparably high degree of accuracy, and made their choices at comparable distances from the obstacle. At narrow angles of approach, patients’ accuracy increased when walking, while controls’ accuracy decreased. When walking, both patients and controls delayed their decisions until closer to the obstacle. We conclude that a head-mounted field restriction is not sufficient for simulating tunnel vision, but that the improved performance observed for walking compared to standing suggests that a walking interface (such as a treadmill) may be essential for eliciting natural perceptually-guided behavior in virtual reality locomotion simulators. PMID:18167511
A Closer Look at Visual Manuals.
ERIC Educational Resources Information Center
van der Meij, Hans
1996-01-01
Examines the visual manual genre, discussing main forms and functions of step-by-step and guided tour manuals in detail. Examines whether a visual manual helps computer users realize tasks faster and more accurately than a non-visual manual. Finds no effects on accuracy, but speedier task execution by 35% for visual manuals. Concludes there is no…
Teaching Students with Visual Impairments. Programming for Students with Special Needs. No. 5.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Special Education Branch.
This resource guide offers suggestions and resources to help provide successful school experiences for students who are blind or visually impaired. Individual sections address: (1) the nature of visual impairment, the specific needs and expectations of students with visual impairment, and the educational implications of visual impairment; (2)…
Computerized Biomechanical Man-Model
1976-07-01
Force Systems Command, Wright-Patterson AFB, Ohio. ABSTRACT: The COMputerized BIomechanical MAN-Model (called COMBIMAN) is a computer interactive graphics... The use of mock-ups for biomechanical evaluation has long been a tool... The concept was to build a mock-up which permitted the designer to visualize the... can become an obstacle to design change... of the Aerospace Medical Research Laboratory, we are developing a computerized biomechanical man-model
Korneeva, E V; Tiunova, A A; Aleksandrov, L I; Golubeva, T B; Anokhin, K V
2014-01-01
The present study analyzed expression of transcriptional factors c-Fos and ZENK in 9-day-old pied flycatcher nestlings' (Ficedula hypoleuca) telencephalic auditory centers (field L, caudomedial nidopallium and caudomedial mesopallium) involved in the acoustically-guided defense behavior. Species-typical alarm call was presented to the young in three groups: 1--intact group (sighted control), 2--nestlings visually deprived just before the experiment for a short time (unsighted control) 3--nestlings visually deprived right after hatching (experimental deprivation). Induction of c-Fos as well as ZENK in nestlings from the experimental deprivation group was decreased in both hemispheres as compared with intact group. In the group of unsighted control, only the decrease of c-Fos induction was observed exclusively in the right hemisphere. These findings suggest that limitation of visual input changes the population of neurons involved into the acoustically-guided behavior, the effect being dependant from the duration of deprivation.
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
Application of a clustering-remote sensing method in analyzing security patterns
NASA Astrophysics Data System (ADS)
López-Caloca, Alejandra; Martínez-Viveros, Elvia; Chapela-Castañares, José Ignacio
2009-04-01
In Mexican academic and government circles, research on criminal spatial behavior has been neglected. Only recently has there been an interest in geo-referencing criminal data. However, more sophisticated spatial analysis models are needed to disclose spatial patterns of crime and pinpoint their changes over time. The main use of these models lies in supporting policy making and strategic intelligence. In this paper we present a model for finding patterns associated with crime. It is based on a fuzzy logic algorithm that finds the best-fitting number of clusters and grouping shapes. We describe the methodology for building the model and its validation. The model was applied to annual data for types of felonies from 2005 to 2006 in the Mexican city of Hermosillo. The results are visualized as a standard deviational ellipse computed for the points identified as a "cluster". These areas indicate high to low demand for public security, and they were cross-related to urban structure analyzed with SPOT images and statistical data such as population, poverty levels, urbanization, and available services. Fusing the model results with other geospatial data makes it possible to detect obstacles and opportunities for crime commission in specific high-risk zones and to guide police activities and criminal investigations.
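The standard deviational ellipse used to visualize each detected cluster is a classical centrographic summary: mean centre of the points, with the ellipse axes given by the eigen-decomposition of the coordinate covariance. The sketch below illustrates that summary statistic only, not the paper's fuzzy clustering; function and parameter names are mine and the coordinates are hypothetical.

```python
import numpy as np

def standard_deviational_ellipse(points):
    """Mean centre, one-standard-deviation semi-axes, and major-axis bearing of
    the standard deviational ellipse for (x, y) incident points. This is the
    classic centrographic SDE; the paper's exact variant may differ."""
    pts = np.asarray(points, float)
    centre = pts.mean(axis=0)
    cov = np.cov((pts - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    angle = np.degrees(np.arctan2(eigvecs[1, 0], eigvecs[0, 0]))  # major-axis direction
    semi_axes = np.sqrt(eigvals)
    return centre, semi_axes, angle

# Hypothetical incident coordinates (projected metres) for illustration.
rng = np.random.default_rng(0)
cluster = rng.normal([500.0, 300.0], [40.0, 15.0], size=(200, 2))
print(standard_deviational_ellipse(cluster))
```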
Bernal, Giovanna M; LaRiviere, Michael J; Mansour, Nassir; Pytel, Peter; Cahill, Kirk E; Voce, David J; Kang, Shijun; Spretz, Ruben; Welp, Ulrich; Noriega, Sandra E; Nunez, Luis; Larsen, Gustavo F; Weichselbaum, Ralph R; Yamini, Bakhtiar
2014-01-01
A major obstacle to the management of malignant glioma is the inability to effectively deliver therapeutic agent to the tumor. In this study, we describe a polymeric nanoparticle vector that not only delivers viable therapeutic, but can also be tracked in vivo using MRI. Nanoparticles, produced by a non-emulsion technique, were fabricated to carry iron oxide within the shell and the chemotherapeutic agent, temozolomide (TMZ), as the payload. Nanoparticle properties were characterized and subsequently their endocytosis-mediated uptake by glioma cells was demonstrated. Convection-enhanced delivery (CED) can disperse nanoparticles through the rodent brain and their distribution is accurately visualized by MRI. Infusion of nanoparticles does not result in observable animal toxicity relative to control. CED of TMZ-bearing nanoparticles prolongs the survival of animals with intracranial xenografts compared to control. In conclusion, the described nanoparticle vector represents a unique multifunctional platform that can be used for image-guided treatment of malignant glioma. GBM remains one of the most notoriously treatment-unresponsive cancer types. In this study, a multifunctional nanoparticle-based temozolomide delivery system was demonstrated to possess enhanced treatment efficacy in a rodent xenograft GBM model, with the added benefit of MRI-based tracking via the incorporation of iron oxide as a T2* contrast material in the nanoparticles. © 2014.
Using Visual Imagery in the Classroom.
ERIC Educational Resources Information Center
Grabow, Beverly
1981-01-01
The use of visual imagery, visualization, and guided and unguided fantasy has potential as a teaching tool for use with learning-disabled children. Visualization utilized in a gamelike atmosphere can help the student learn new concepts, can positively affect social behaviors, and can help with emotional control. (SB)
ERIC Educational Resources Information Center
Cangemi, Sam
This guide describes and illustrates 50 perceptual games for preschool children which may be constructed by teachers. Inexpensive, easily obtained game materials are suggested. The use of tactile and visual perceptual games gives children opportunities to make choices and discriminations, and provides reading readiness experiences. Games depicted…
Local navigation and fuzzy control realization for autonomous guided vehicle
NASA Astrophysics Data System (ADS)
El-Konyaly, El-Sayed H.; Saraya, Sabry F.; Shehata, Raef S.
1996-10-01
This paper addresses the problem of local navigation for an autonomous guided vehicle (AGV) in a structured environment that contains static and dynamic obstacles. Information about the environment is obtained via a CCD camera. The problem is formulated as a dynamic feedback control problem in which speed and steering decisions are made on the fly while the AGV is moving. A decision element (DE) that uses local information is proposed. The DE guides the vehicle in the environment by producing appropriate navigation decisions. Dynamic models of a three-wheeled vehicle for driving and steering mechanisms are derived. The interaction between them is performed via the local feedback DE. A controller, based on fuzzy logic, is designed to drive the vehicle safely in an intelligent and human-like manner. The effectiveness of the navigation and control strategies in driving the AGV is illustrated and evaluated.
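A fuzzy controller of the kind described here maps perceived quantities (such as lateral offset from the desired path) through membership functions and a small rule base to steering and speed commands. The sketch below is a toy Mamdani-style example only: the membership functions, rules, and gains are assumptions of mine, not those of the cited AGV controller.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(lateral_offset_m):
    """Toy Mamdani controller: fuzzify the offset (metres, positive = vehicle
    right of the path), fire three rules, and defuzzify with a weighted average.
    Positive output = steer left, in degrees. All values are illustrative."""
    memberships = {
        "left_of_path":  tri(lateral_offset_m, -2.0, -1.0, 0.0),
        "on_path":       tri(lateral_offset_m, -0.5,  0.0, 0.5),
        "right_of_path": tri(lateral_offset_m,  0.0,  1.0, 2.0),
    }
    # Rule consequents: representative steering angles (assumed values).
    consequents = {"left_of_path": -15.0, "on_path": 0.0, "right_of_path": 15.0}
    num = sum(memberships[k] * consequents[k] for k in memberships)
    den = sum(memberships.values()) or 1.0
    return num / den

# Vehicle 0.4 m right of the path -> a moderate steer-left command.
print(fuzzy_steering(0.4))
```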
Creative Visualization Activities.
ERIC Educational Resources Information Center
Fugitt, Eva D.
1986-01-01
Presents a series of classroom exercises and activities that stimulate children's creativity through the use of visualization. Discusses procedures for guided imagery and offers some examples of "trips" to imaginary places. Proposes visualization as a warm-up exercise before art lessons. (DR)
Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data
Zhaoping, Li; Zhe, Li
2015-01-01
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
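The quantitative prediction sketched in this abstract has the flavor of a race model: if the V1 saliency at the triple-feature singleton is the maximum response over the feature-tuned (and available conjunctively tuned) neural populations, and each population alone would yield a detection time with some distribution, then the time to find the triple singleton is the minimum over those racers. A minimal way to write that reading, assuming independent racers, is below; the paper's actual derivation, including exactly which singleton types enter the race given V1's tuning, is more careful than this sketch.

```latex
T_{\mathrm{CMO}} = \min_i T_i
\quad\Longrightarrow\quad
F_{\mathrm{CMO}}(t) = \Pr\!\left(T_{\mathrm{CMO}} \le t\right)
 = 1 - \prod_i \bigl(1 - F_i(t)\bigr),
```

where the index i runs over the singleton types whose V1 responses the hypothesis allows to contribute, and no free parameters are fitted.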
Acceptance of Dog Guides and Daily Stress Levels of Dog Guide Users and Nonusers
ERIC Educational Resources Information Center
Matsunaka, Kumiko; Koda, Naoko
2008-01-01
The degree of acceptance of dog guides at public facilities, which is required by law in Japan, was investigated, and evidence of rejection was found. Japanese people with visual impairments who used dog guides reported higher daily stress levels than did those who did not use dog guides. (Contains 3 tables and 1 figure.)
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on orientation and mobility services (dog guide program emphasis) is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of…
ERIC Educational Resources Information Center
Community Coll. of Rhode Island, Warwick.
This implementation guide contains information based on experiences that occurred during the development and implementation of the Rhode Island Tech Prep Model. It is intended to assist educators in addressing challenges and obstacles faced by the program early in the planning process. It begins with a rationale for tech prep. Rhode Island…
Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance
2007-03-31
Final report, 03/01/04 - 02/28/07. Neuromorphic VLSI-based bat echolocation for micro-aerial vehicle guidance: ...uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo... timing-based algorithm (’openspace’) for sonar-guided navigation amidst multiple obstacles. Subject terms: Neuromorphic VLSI, bat echolocation
NASA Astrophysics Data System (ADS)
Schmerwitz, S.; Doehler, H.-U.; Ellis, K.; Jennings, S.
2011-06-01
The DLR project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is devoted to demonstrating and evaluating the characteristics of sensors for helicopter operations in degraded visual environments. Millimeter wave radar is one of the many sensors considered for use in brown-out. It delivers lower angular resolution than other sensors; however, it may provide the best dust penetration capabilities. In cooperation with the NRC, flight tests on a Bell 205 were conducted to gather sensor data from a 35 GHz pencil beam radar for terrain mapping, obstacle detection and dust penetration. In this paper, preliminary results from the flight trials at NRC are presented and the radar's general capability is described. Furthermore, insight is provided into the concept of multi-sensor fusion as attempted in the ALLFlight project.
Clothing Construction: An Instructional Package with Adaptations for Visually Impaired Individuals.
ERIC Educational Resources Information Center
Crawford, Glinda B.; And Others
Developed for the home economics teacher of mainstreamed visually impaired students, this guide provides clothing instruction lesson plans for the junior high level. First, teacher guidelines are given, including characteristics of the visually impaired, orienting such students to the classroom, orienting class members to the visually impaired,…
Neural correlates of learning and trajectory planning in the posterior parietal cortex
Torres, Elizabeth B.; Quian Quiroga, Rodrigo; Cui, He; Buneo, Christopher A.
2013-01-01
The posterior parietal cortex (PPC) is thought to play an important role in the planning of visually-guided reaching movements. However, the relative roles of the various subdivisions of the PPC in this function are still poorly understood. For example, studies of dorsal area 5 point to a representation of reaches in both extrinsic (endpoint) and intrinsic (joint or muscle) coordinates, as evidenced by partial changes in preferred directions and positional discharge with changes in arm posture. In contrast, recent findings suggest that the adjacent medial intraparietal area (MIP) is involved in more abstract representations, e.g., encoding reach target in visual coordinates. Such a representation is suitable for planning reach trajectories involving shortest distance paths to targets straight ahead. However, it is currently unclear how MIP contributes to the planning of other types of trajectories, including those with various degrees of curvature. Such curved trajectories recruit different joint excursions and might help us address whether their representation in the PPC is purely in extrinsic coordinates or in intrinsic ones as well. Here we investigated the role of the PPC in these processes during an obstacle avoidance task for which the animals had not been explicitly trained. We found that PPC planning activity was predictive of both the spatial and temporal aspects of upcoming trajectories. The same PPC neurons predicted the upcoming trajectory in both endpoint and joint coordinates. The predictive power of these neurons remained stable and accurate despite concomitant motor learning across task conditions. These findings suggest the role of the PPC can be extended from specifying abstract movement goals to expressing these plans as corresponding trajectories in both endpoint and joint coordinates. Thus, the PPC appears to contribute to reach planning and approach-avoidance arm motions at multiple levels of representation. PMID:23730275
Spiegel, Tali; De Bel, Vera; Steverink, Nardi
2016-01-01
This study aims to describe the interplay between the work trajectories and the passing patterns of individuals with degenerative eye conditions in different phases of their career, as well as the disease progression and the career and well-being outcomes associated with different work and passing trajectories. Qualitative interviews on the topic of work trajectories were conducted with 36 working or retired individuals with degenerative eye conditions. The "bigger picture" method was used to explore passing and concealment behavioral patterns, and their associations with various work trajectories. Five patterns of passing and concealment behavior in the workplace were identified and were linked with various work trajectories among visually impaired study participants: (1) no career adjustments, concealed condition throughout career; (2) revealed condition after adjusting career plans; (3) increasingly open about their condition over the course of their career; (4) engaged in career planning, always open about their condition; and (5) engaged in limited career planning, always open about their condition. Patterns characterized by less planning and more identity concealment were associated with more stress and lower levels of self-acceptance, while patterns characterized by more planning for vision deterioration and less passing behavior were associated with higher levels of self-acceptance and fewer obstacles over the course of an individual's career. The study's findings can serve as a guide for health professionals. Many individuals with degenerative eye conditions try to conceal their identity as visually impaired in the professional setting. Different aspects of career outcomes (e.g. age of retirement) and wellbeing outcomes (e.g. self-acceptance and stress) are associated with the identity concealment patterns of individuals throughout their careers. Identifying concealment patterns will allow health professionals to tackle particular adverse outcomes and challenges associated with these patterns.
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
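SeeDB's deviation-based utility scores a candidate visualization by how much the aggregate distribution it shows for the queried subset departs from the same aggregate over a reference (e.g., the whole table). The sketch below illustrates that idea only: the attribute names and sample rows are invented, and the use of total variation (L1) distance is an assumption standing in for whichever distribution distance the system is configured to use.

```python
import numpy as np
from collections import defaultdict

def aggregate_distribution(rows, group_attr, measure_attr):
    """SUM(measure) GROUP BY group_attr, normalised into a probability vector."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_attr]] += row[measure_attr]
    keys = sorted(totals)
    vec = np.array([totals[k] for k in keys], float)
    return keys, vec / vec.sum()

def deviation_utility(target_rows, reference_rows, group_attr, measure_attr):
    """Utility of a candidate (group_attr, SUM(measure_attr)) visualization:
    distance between the target and reference normalised distributions."""
    keys_t, p = aggregate_distribution(target_rows, group_attr, measure_attr)
    keys_r, q = aggregate_distribution(reference_rows, group_attr, measure_attr)
    keys = sorted(set(keys_t) | set(keys_r))
    p_full = np.array([p[keys_t.index(k)] if k in keys_t else 0.0 for k in keys])
    q_full = np.array([q[keys_r.index(k)] if k in keys_r else 0.0 for k in keys])
    return 0.5 * np.abs(p_full - q_full).sum()   # total variation distance (assumed)

# Hypothetical rows; attribute names are made up for illustration.
reference = [{"region": "N", "sales": 10}, {"region": "S", "sales": 30},
             {"region": "E", "sales": 20}, {"region": "W", "sales": 40}]
target = [r for r in reference if r["region"] in ("N", "S")]
print(deviation_utility(target, reference, "region", "sales"))
```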
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.
Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang
2018-05-17
To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN), a hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided by two-dimensional (2D) US. Visualization of the needle tract, puncture time and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, the puncture time and the number of puncture attempts were 3, 7.3 ± 3.1 s and one, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s and 2.1 ± 0.6 attempts. Visualization of the needle tract in group 1 was superior to that in group 2, and puncture time and number of puncture attempts were both lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors. BJU International © 2018 BJU International. Published by John Wiley & Sons Ltd.
Enhanced Lesion Visualization in Image-Guided Noninvasive Surgery With Ultrasound Phased Arrays
2001-10-25
Yao, Hui; Phukpattaranont, Pornchai; Ebbini, Emad S. (Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455)
Abstract (recovered fragment): We describe dual-mode ultrasound phased…
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
The present models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in the guidance information. To address these problems, we propose a supervised guiding Long Short-Term Memory model based on object classes, named S-gLSTM for short. It uses high-confidence object detection results from R-FCN as supervisory information, and it updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance, to steer the caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the network weights through back-propagation of the guiding loss. Supplying guidance information at each iteration addresses the visual semantic information attenuation of the traditional LSTM model, and the supervised guidance reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than state-of-the-art models.
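The guidance-word bookkeeping described above can be pictured with a tiny Python sketch: keep only high-confidence detected object classes as guidance words and drop a word once the caption model has emitted it. Everything here (function names, the detection format, the exact match rule) is an illustrative assumption, not the S-gLSTM architecture itself.

# Hedged sketch of the guidance-word bookkeeping only (not the S-gLSTM model):
# keep high-confidence detected object classes as guidance words and drop a
# word once the caption generator has emitted it. Formats and names are
# illustrative assumptions.
def init_guidance(detections, confidence_threshold=0.8):
    """detections: iterable of (class_name, confidence); keep confident classes."""
    return {cls for cls, score in detections if score >= confidence_threshold}

def update_guidance(guidance_words, last_output_word):
    """Remove a guidance word once the caption model has produced it."""
    return guidance_words - {last_output_word}

# Example with illustrative detector output (not from the paper).
detections = [("dog", 0.93), ("frisbee", 0.88), ("car", 0.41)]
guidance = init_guidance(detections)               # {"dog", "frisbee"}
for word in ["a", "dog", "catches", "a", "frisbee"]:
    guidance = update_guidance(guidance, word)
print(guidance)                                    # set(): both objects mentioned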
Head-mounted eye tracking: a new method to describe infant looking.
Franchak, John M; Kretch, Kari S; Soska, Kasey C; Adolph, Karen E
2011-01-01
Despite hundreds of studies describing infants' visual exploration of experimental stimuli, researchers know little about where infants look during everyday interactions. The current study describes the first method for studying visual behavior during natural interactions in mobile infants. Six 14-month-old infants wore a head-mounted eye-tracker that recorded gaze during free play with mothers. Results revealed that infants' visual exploration is opportunistic and depends on the availability of information and the constraints of infants' own bodies. Looks to mothers' faces were rare following infant-directed utterances but more likely if mothers were sitting at infants' eye level. Gaze toward the destination of infants' hand movements was common during manual actions and crawling, but looks toward obstacles during leg movements were less frequent. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2015-08-12
Human parietal cortex plays a central role in encoding visuospatial information and multiple visual maps exist within the intraparietal sulcus (IPS), with each hemisphere symmetrically representing contralateral visual space. Two forms of hemispheric asymmetries have been identified in parietal cortex ventrolateral to visuotopic IPS. Key attentional processes are localized to right lateral parietal cortex in the temporoparietal junction and long-term memory (LTM) retrieval processes are localized to the left lateral parietal cortex in the angular gyrus. Here, using fMRI, we investigate how spatial representations of visuotopic IPS are influenced by stimulus-guided visuospatial attention and by LTM-guided visuospatial attention. We replicate prior findings that a hemispheric asymmetry emerges under stimulus-guided attention: in the right hemisphere (RH), visual maps IPS0, IPS1, and IPS2 code attentional targets across the visual field; in the left hemisphere (LH), IPS0-2 codes primarily contralateral targets. We report the novel finding that, under LTM-guided attention, both RH and LH IPS0-2 exhibit bilateral responses and hemispheric symmetry re-emerges. Therefore, we demonstrate that both hemispheres of IPS0-2 are independently capable of dynamically changing spatial coding properties as attentional task demands change. These findings have important implications for understanding visuospatial and memory-retrieval deficits in patients with parietal lobe damage. The human parietal lobe contains multiple maps of the external world that spatially guide perception, action, and cognition. Maps in each cerebral hemisphere code information from the opposite side of space, not from the same side, and the two hemispheres are symmetric. Paradoxically, damage to specific parietal regions that lack spatial maps can cause patients to ignore half of space (hemispatial neglect syndrome), but only for right (not left) hemisphere damage. Conversely, the left parietal cortex has been linked to retrieval of vivid memories regardless of space. Here, we investigate possible underlying mechanisms in healthy individuals. We demonstrate two forms of dynamic changes in parietal spatial representations: an asymmetric one for stimulus-guided attention and a symmetric one for long-term memory-guided attention. Copyright © 2015 the authors 0270-6474/15/3511358-06$15.00/0.
Elephants know when their bodies are obstacles to success in a novel transfer task
Dale, Rachel; Plotnik, Joshua M.
2017-01-01
The capacity to recognise oneself as separate from other individuals and objects is difficult to investigate in non-human animals. The hallmark empirical assessment, the mirror self-recognition test, focuses on an animal’s ability to recognise itself in a mirror and success has thus far been demonstrated in only a small number of species with a keen interest in their own visual reflection. Adapting a recent study done with children, we designed a new body-awareness paradigm for testing an animal’s understanding of its place in its environment. In this task, Asian elephants (Elephas maximus) were required to step onto a mat and pick up a stick attached to it by rope, and then pass the stick forward to an experimenter. In order to do the latter, the elephants had to see their body as an obstacle to success and first remove their weight from the mat before attempting to transfer the stick. The elephants got off the mat in the test significantly more often than in controls, where getting off the mat was unnecessary. This task helps level the playing field for non-visual species tested on cognition tasks and may help better define the continuum on which body- and self-awareness lie. PMID:28402335
Electronic bracelet and vision-enabled waist-belt for mobility of visually impaired people.
Bhatlawande, Shripad; Sunkari, Amar; Mahadevappa, Manjunatha; Mukhopadhyay, Jayanta; Biswas, Mukul; Das, Debabrata; Gupta, Somedeb
2014-01-01
A wearable assistive system is proposed to improve mobility of visually impaired people (subjects). This system has been implemented in the shape of a bracelet and waist-belt in order to increase its wearable convenience and cosmetic acceptability. A camera and an ultrasonic sensor are attached to a customized waist-belt and bracelet, respectively. The proposed modular system acts as a complementary aid along with a white cane. Its vision-enabled waist-belt module detects the path and the distribution of obstacles on the path. This module conveys the required information to a subject via a mono earphone by activating relevant spoken messages. The electronic bracelet module assists the subject in verifying this information and in perceiving the distance and location of obstacles. The proposed complementary system provides an improved understanding of the surrounding environment with less cognitive and perceptual effort than a white cane alone. This system was subjected to clinical evaluations with 15 totally blind subjects. Results of usability experiments demonstrated effectiveness of the system as a mobility aid. Among the participating subjects, 93.33% expressed satisfaction with the information content of the system, 86.66% found its operation convenient, and 80% appreciated its comfort.
Visual-motor recalibration in geographical slant perception
NASA Technical Reports Server (NTRS)
Bhalla, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1999-01-01
In 4 experiments, it was shown that hills appear steeper to people who are encumbered by wearing a heavy backpack (Experiment 1), are fatigued (Experiment 2), are of low physical fitness (Experiment 3), or are elderly and/or in declining health (Experiment 4). Visually guided actions are unaffected by these manipulations of physiological potential. Although dissociable, the awareness and action systems were also shown to be interconnected. Recalibration of the transformation relating awareness and actions was found to occur over long-term changes in physiological potential (fitness level, age, and health) but not with transitory changes (fatigue and load). Findings are discussed in terms of a time-dependent coordination between the separate systems that control explicit visual awareness and visually guided action.
Guiding principles of value creation through collaborative innovation in pharmaceutical research.
Schweizer, Liang; He, Jeff
2018-02-01
Open innovation has become the main trend in pharmaceutical research. Potential obstacles and pitfalls of collaborations often lead to missed opportunities and/or poorly executed partnerships. This paper aims to provide a framework that facilitates the execution of successful collaborations. We start by mapping out three checkpoints onto early-stage collaborative partnerships: inception, ignition and implementation. Different value types and value drivers are then laid out for each phase of the partnership. We proceed to propose a ratio-driven approach and a value-adjustment mechanism, enhancing the probability of successes in pharmaceutical research collaborations. These guiding principles combined should help the partners either reach agreement more quickly or move on to the next potential project. Copyright © 2017 Elsevier Ltd. All rights reserved.
How to write an educational research grant: AMEE Guide No. 101.
Blanco, Maria A; Gruppen, Larry D; Artino, Anthony R; Uijtdehaage, Sebastian; Szauter, Karen; Durning, Steven J
2016-01-01
Writing an educational research grant in health profession education is challenging, not only for those doing it for the first time but also for more experienced scholars. The intensity of the competition, the peculiarities of the grant format, the risk of rejection, and the time required are among the many obstacles that can prevent educational researchers with interesting and important ideas from writing a grant that could provide the funding needed to turn their scholarly ideas into reality. The aim of this AMEE Guide is to clarify the grant-writing process by (a) explaining the mechanics and structure of a typical educational research grant proposal, and (b) sharing tips and strategies for making the process more manageable.
Forum Guide to Data Visualization: A Resource for Education Agencies. NFES 2017-016
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2016
2016-01-01
The purpose of this document is to recommend data visualization practices that will help education agencies communicate data meaning in visual formats that are accessible, accurate, and actionable for a wide range of education stakeholders. Although this resource is designed for staff in education agencies, many of the visualization principles…
ERIC Educational Resources Information Center
Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari
2009-01-01
In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…
Bullying 101: The Club Crew's Guide to Bullying Prevention
ERIC Educational Resources Information Center
PACER Center, 2013
2013-01-01
"Bullying 101" is the Club Crew's Guide to Bullying Prevention. A visually-friendly, age-appropriate, 16-page colorful guide for students to read or for parents to use when talking with children, this guide describes and explains what bullying is and is not, the roles of other students, and tips on what each student can do to prevent…
Penilla, Carlos; Tschann, Jeanne M; Sanchez-Vaznaugh, Emma V; Flores, Elena; Ozer, Emily J
2017-11-02
The prevalence of obesity among Latino children is alarmingly high when compared to non-Latino White children. Low-income Latino parents living in urban areas, even if they are well-educated, face obstacles that shape familial health behaviors. This study used qualitative methods to explore parents' experiences in providing meals and opportunities to play to their children aged 2 to 5 years. In contrast to most prior studies, this study examined perceptions of familial behaviors among both mothers and fathers. An ecological framework for exploring the associations of parental feeding behaviors and children's weight informed this study. An interview guide was developed to explore parents' experiences and perceptions about children's eating and physical activity and administered to six focus groups in a community-based organization in the Mission District of San Francisco. Transcripts were coded and analyzed. Twenty-seven mothers and 22 fathers of Latino children ages 2 to 5 participated. Mothers, fathers, and couples reported that employment, day care, neighborhood environments and community relationships were experienced and perceived as obstacles to promoting healthy behaviors among their children, including drinking water instead of soda and participating in organized playtime with other preschool-age children. Results from this study suggest that parents' demographic, social and community characteristics influence what and how they feed their children, as well as how often and the types of opportunities they provide for physical activity, providing further evidence that an ecological framework is useful for guiding research with both mothers and fathers. Mothers and fathers identified numerous community and society-level constraints in their urban environments. The results point to the importance of standardized work hours, resources for day care providers, clean and safe streets and parks, strong community relationships, and reduced access to sugar-sweetened beverages in preventing the development of obesity in preschool-age Latino children.
Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-05-01
Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both the digital and physical worlds, without encumbrance or discomfort, confronts many grand challenges from both technological and human-factors perspectives. Among these, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach to environment recognition for robot navigation is proposed. This work includes a template-matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem setting, the robot moves from start to goal by choosing a single path from among multiple possible routes. To generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm that derives optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of environment recognition accuracy and path planning computation efficiency.
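For readers unfamiliar with potential-field planning, the sketch below shows the classic textbook form of the attractive/repulsive field and a gradient-descent stepper in Python. It deliberately omits the pseudo-bacterial evolutionary tuning of the field parameters that the abstract describes; the gains, influence radius, step size, and obstacle are generic assumptions.

# Hedged sketch of a classic artificial potential field for mobile-robot path
# planning (generic textbook form; the paper evolves the field parameters with
# a pseudo-bacterial algorithm, which is not shown here).
import numpy as np

def attractive_force(pos, goal, k_att=1.0):
    """Pulls the robot toward the goal, proportional to the distance."""
    return k_att * (goal - pos)

def repulsive_force(pos, obstacles, k_rep=100.0, influence=2.0):
    """Pushes the robot away from obstacles closer than `influence`."""
    force = np.zeros(2)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return force

def plan(start, goal, obstacles, step=0.05, max_iter=2000, tol=0.1):
    """Gradient-descent path from start to goal; may stall in local minima."""
    pos, goal = np.array(start, float), np.array(goal, float)
    path = [pos.copy()]
    for _ in range(max_iter):
        f = attractive_force(pos, goal) + repulsive_force(pos, obstacles)
        pos = pos + step * f / (np.linalg.norm(f) + 1e-9)
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break
    return np.array(path)

path = plan(start=(0, 0), goal=(10, 10), obstacles=[np.array([5.0, 5.2])])

The pseudo-bacterial step in the paper would tune k_att, k_rep, and the influence radius rather than fixing them by hand as done here.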
NASA Astrophysics Data System (ADS)
Rieder, Christian; Schwier, Michael; Weihusen, Andreas; Zidowitz, Stephan; Peitgen, Heinz-Otto
2009-02-01
Image-guided radiofrequency ablation (RFA) is becoming a standard minimally invasive method for tumor treatment in clinical routine. Visualization of pathological tissue and potential risk structures such as vessels or important organs gives essential support in image-guided pre-interventional RFA planning. In this work our aim is to present novel visualization techniques for interactive RFA planning that support the physician with spatial information about pathological structures and with finding trajectories that do not harm vitally important tissue. Furthermore, we illustrate three-dimensional applicator models from different manufacturers, combined with the corresponding ablation areas in homogeneous tissue as specified by the manufacturers, to improve the estimation of cell destruction caused by ablation. The visualization techniques are embedded in a workflow-oriented application designed for use in clinical routine. To allow high-quality volume rendering we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need for a segmentation mask. However, insufficient visualization of the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique for liver tumors, for volumetric measurement and for visualization of pathological tissue combined with anatomical structures. Finally, to support coagulation estimation with respect to the heat-sink effect of cooling blood flow, which decreases thermal ablation, a numerical simulation of the heat distribution is provided.
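As a hedged illustration of how fuzzy c-means memberships can yield a transfer function without a segmentation mask, the Python sketch below clusters scalar intensities into two fuzzy classes and maps the brighter cluster's membership to opacity. The two-cluster setup, the parameter values, and the lookup-table construction are assumptions made for the example, not the authors' pipeline.

# Hedged sketch: fuzzy c-means on image intensities, then use the membership of
# the brighter cluster as an opacity transfer function for volume rendering.
# Illustrative only; cluster count and parameters are assumptions.
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means on scalar intensities; returns centers and memberships."""
    values = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)
    centers = np.zeros(c)
    for _ in range(iters):
        um = u ** m
        centers = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        new_u = 1.0 / (dist ** (2.0 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)
        converged = np.abs(new_u - u).max() < tol
        u = new_u
        if converged:
            break
    return centers, u

# Build an opacity lookup table: mean membership of the brighter cluster per intensity bin.
intensities = np.random.default_rng(1).normal(100, 40, 5000).clip(0, 255)
centers, u = fuzzy_cmeans_1d(intensities)
bright = int(np.argmax(centers))
bins = intensities.astype(int)
opacity_lut = np.zeros(256)
for b in range(256):
    mask = bins == b
    opacity_lut[b] = u[mask, bright].mean() if mask.any() else 0.0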
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
Orthogonal on-off control of radar pulses for the suppression of mutual interference
NASA Astrophysics Data System (ADS)
Kim, Yong Cheol
1998-10-01
Intelligent vehicles of the future will be guided by radars and other sensors to avoid obstacles. When multiple vehicles move simultaneously in autonomous navigation mode, mutual interference among car radars becomes a serious problem. An obstacle is illuminated with electromagnetic pulses from several radars, so the signal at a radar receiver is actually a mixture of the self-reflection and the reflections of interfering pulses emitted by others. When standardized pulse-type radars are employed on vehicles for obstacle avoidance, the self-pulse and the interfering pulses have identical pulse repetition intervals, and this synchronous interference (SI) is very difficult to separate from the true reflection. We present a method of suppressing such synchronous interference. By controlling the pulse emission of a radar in a binary orthogonal ON/OFF pattern, the true self-reflection can be separated from the false one. Two range maps are generated: the TRM (true-reflection map) and the SIM (synchronous-interference map). The TRM is updated during every ON interval and the SIM during every OFF interval of the self-radar. The SIM represents the SI of interfering radars, while the TRM records a mixture of the true self-reflection and SI. Hence the true obstacles can be identified by a set subtraction operation. The performance of the proposed method is compared with that of the conventional M-of-N method. Bayesian analysis shows that the probability of false alarm is improved by roughly three to six orders of magnitude (a factor of 10³ to approximately 10⁶), while the deterioration in the probability of detection is negligible.
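The ON/OFF separation logic can be illustrated with a small Python sketch: detections seen during OFF intervals can only come from interfering radars, so subtracting them from the detections accumulated during ON intervals leaves the true self-reflections. Representing detections as sets of range bins, and the example values, are assumptions made purely for illustration.

# Hedged sketch of the ON/OFF map idea: during OFF intervals only interfering
# radars contribute, so detections seen then (SIM) can be subtracted from the
# detections accumulated during ON intervals (TRM) to keep true reflections.
def update_maps(trm, sim, detections, transmitting):
    """Accumulate detections into TRM when our radar is ON, into SIM when OFF."""
    if transmitting:
        trm |= detections
    else:
        sim |= detections
    return trm, sim

def true_obstacles(trm, sim):
    """True reflections = ON-interval detections minus OFF-interval detections."""
    return trm - sim

# Example with range bins as integers (illustrative values).
trm, sim = set(), set()
frames = [({12, 30, 47}, True),   # ON: self-reflection at bin 12 plus interference
          ({30, 47}, False),      # OFF: only interference visible
          ({12, 30}, True)]
for detections, on in frames:
    trm, sim = update_maps(trm, sim, detections, on)
print(true_obstacles(trm, sim))   # {12}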
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes advantages and disadvantages of each. Current deficits pertain to the validation process.
Image fusion and navigation platforms for percutaneous image-guided interventions.
Rajagopal, Manoj; Venkatesan, Aradhana M
2016-04-01
Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.
Wavefront-Guided Scleral Lens Prosthetic Device for Keratoconus
Sabesan, Ramkumar; Johns, Lynette; Tomashevskaya, Olga; Jacobs, Deborah S.; Rosenthal, Perry; Yoon, Geunyoung
2016-01-01
Purpose To investigate the feasibility of correcting ocular higher order aberrations (HOA) in keratoconus (KC) using wavefront-guided optics in a scleral lens prosthetic device (SLPD). Methods Six advanced keratoconus patients (11 eyes) were fitted with a SLPD with conventional spherical optics. A custom-made Shack-Hartmann wavefront sensor was used to measure aberrations through a dilated pupil wearing the SLPD. The position of SLPD, i.e. horizontal and vertical decentration relative to the pupil and rotation were measured and incorporated into the design of the wavefront-guided optics for the customized SLPD. A submicron-precision lathe created the designed irregular profile on the front surface of the device. The residual aberrations of the same eyes wearing the SLPD with wavefront-guided optics were subsequently measured. Visual performance with natural mesopic pupil was compared between SLPDs having conventional spherical and wavefront-guided optics by measuring best-corrected high-contrast visual acuity and contrast sensitivity. Results Root-mean-square of HOA(RMS) in the 11 eyes wearing conventional SLPD with spherical optics was 1.17±0.57μm for a 6 mm pupil. HOA were effectively corrected by the customized SLPD with wavefront-guided optics and RMS was reduced 3.1 times on average to 0.37±0.19μm for the same pupil. This correction resulted in significant improvement of 1.9 lines in mean visual acuity (p<0.05). Contrast sensitivity was also significantly improved by a factor of 2.4, 1.8 and 1.4 on average for 4, 8 and 12 cycles/degree, respectively (p<0.05 for all frequencies). Although the residual aberration was comparable to that of normal eyes, the average visual acuity in logMAR with the customized SLPD was 0.21, substantially worse than normal acuity. Conclusions The customized SLPD with wavefront-guided optics corrected the HOA of advanced KC patients to normal levels and improved their vision significantly. PMID:23478630
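For context on the RMS figures quoted above, higher-order RMS wavefront error is the square root of the sum of squared Zernike coefficients of radial order three and above. The short Python sketch below assumes the coefficients are given in micrometres over the stated pupil; the (n, m, value) tuples are illustrative, not study data.

# Hedged sketch: higher-order RMS wavefront error from Zernike coefficients
# (micrometres), i.e. the square root of the sum of squared coefficients with
# radial order n >= 3. Coefficient values below are illustrative.
import math

def higher_order_rms(zernike_terms):
    """zernike_terms: iterable of (n, m, coefficient_um); returns HOA RMS in um."""
    return math.sqrt(sum(c**2 for n, m, c in zernike_terms if n >= 3))

terms = [(2, 0, 0.50),   # defocus (excluded: lower order)
         (3, -1, 0.60),  # vertical coma
         (3, 3, 0.40),   # trefoil
         (4, 0, 0.30)]   # spherical aberration
print(round(higher_order_rms(terms), 2))  # 0.78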
Needle Steering in 3-D Via Rapid Replanning
Patil, Sachin; Burgner, Jessica; Webster, Robert J.; Alterovitz, Ron
2014-01-01
Steerable needles have the potential to improve the effectiveness of needle-based clinical procedures such as biopsy and drug delivery by improving targeting accuracy and reaching previously inaccessible targets that are behind sensitive or impenetrable anatomical regions. We present a new needle steering system capable of automatically reaching targets in 3-D environments while avoiding obstacles and compensating for real-world uncertainties. Given a specification of anatomical obstacles and a clinical target (e.g., from preoperative medical images), our system plans and controls needle motion in a closed-loop fashion under sensory feedback to optimize a clinical metric. We unify planning and control using a new fast algorithm that continuously replans the needle motion. Our rapid replanning approach is enabled by an efficient sampling-based rapidly exploring random tree (RRT) planner that achieves orders-of-magnitude reduction in computation time compared with prior 3-D approaches by incorporating variable curvature kinematics and a novel distance metric for planning. Our system uses an electromagnetic tracking system to sense the state of the needle tip during the procedure. We experimentally evaluate our needle steering system using tissue phantoms and animal tissue ex vivo. We demonstrate that our rapid replanning strategy successfully guides the needle around obstacles to desired 3-D targets with an average error of less than 3 mm. PMID:25435829
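To give a feel for the sampling-based planning loop underlying the rapid-replanning approach, here is a bare-bones two-dimensional RRT in Python. It omits the variable-curvature needle kinematics, the custom distance metric, and the closed-loop replanning that are the paper's actual contributions; the bounds, step size, and obstacle are illustrative assumptions.

# Hedged sketch of a plain 2-D RRT (no curvature constraints, no replanning);
# the paper's planner is a far faster, kinematics-aware variant of this idea.
import math
import random

def rrt(start, goal, in_collision, step=0.5, goal_tol=0.5, max_iter=5000, bounds=(0, 10)):
    nodes, parents = [start], {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[nearest]
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        if in_collision(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = nearest
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:          # walk back to the root to recover the path
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

# Illustrative obstacle: a disc of radius 1 centred at (5, 5).
path = rrt((1, 1), (9, 9), lambda p: math.dist(p, (5, 5)) < 1.0)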
Skating down a steeper slope: Fear influences the perception of geographical slant
Stefanucci, Jeanine K.; Proffitt, Dennis R.; Clore, Gerald L.; Parekh, Nazish
2008-01-01
Conscious awareness of hill slant is overestimated, but visually guided actions directed at hills are relatively accurate. Also, steep hills are consciously estimated to be steeper from the top as opposed to the bottom, possibly because they are dangerous to walk down. In the present study, participants stood at the top of a hill on either a skateboard or a wooden box of the same height. They gave three estimates of the slant of the hill: a verbal report, a visually matched estimate, and a visually guided action. Fear of descending the hill was also assessed. Those participants that were scared (by standing on the skateboard) consciously judged the hill to be steeper relative to participants who were unafraid. However, the visually guided action measure was accurate across conditions. These results suggest that our explicit awareness of slant is influenced by the fear associated with a potentially dangerous action. “[The phobic] reported that as he drove towards bridges, they appeared to be sloping at a dangerous angle.” (Rachman and Cuk 1992 p. 583). PMID:18414594
Tools for Visualizing HIV in Cure Research.
Niessl, Julia; Baxter, Amy E; Kaufmann, Daniel E
2018-02-01
The long-lived HIV reservoir remains a major obstacle for an HIV cure. Current techniques to analyze this reservoir are generally population-based. We highlight recent developments in methods visualizing HIV, which offer a different, complementary view, and provide indispensable information for cure strategy development. Recent advances in fluorescence in situ hybridization techniques enabled key developments in reservoir visualization. Flow cytometric detection of HIV mRNAs, concurrently with proteins, provides a high-throughput approach to study the reservoir on a single-cell level. On a tissue level, key spatial information can be obtained detecting viral RNA and DNA in situ by fluorescence microscopy. At total-body level, advancements in non-invasive immuno-positron emission tomography (PET) detection of HIV proteins may allow an encompassing view of HIV reservoir sites. HIV imaging approaches provide important, complementary information regarding the size, phenotype, and localization of the HIV reservoir. Visualizing the reservoir may contribute to the design, assessment, and monitoring of HIV cure strategies in vitro and in vivo.
[Spatial domain display for interference image dataset].
Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia
2011-11-01
The need to visualize imaging-interferometer data is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization has focused only on spectral image datasets in the spectral domain, so quick display of interference spectral image datasets remains a bottleneck in interference image processing. The conventional approach visualizes an interference dataset with classical spectral-image display methods after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the image domain is addressed and a simplifying algorithm is proposed. The Fourier transformation is an obstacle because its computation time is large and grows worse as the dataset size increases. The proposed algorithm, named interference weighted envelopes, frees the display from that transformation. The authors define three interference weighted envelopes based, respectively, on the Fourier transformation, on features of the interference data, and on the human visual system. Comparison of the proposed and conventional methods shows a large difference in display time.
Real-world visual search is dominated by top-down guidance.
Chen, Xin; Zelinsky, Gregory J
2006-11-01
How do bottom-up and top-down guidance signals combine to guide search behavior? Observers searched for a target either with or without a preview (top-down manipulation) or a color singleton (bottom-up manipulation) among the display objects. With a preview, reaction times were faster and more initial eye movements were guided to the target; the singleton failed to attract initial saccades under these conditions. Only in the absence of a preview did subjects preferentially fixate the color singleton. We conclude that the search for realistic objects is guided primarily by top-down control. Implications for saliency map models of visual search are discussed.
Shankar, S; Ellard, C
2000-02-01
Past research has indicated that many species use the time-to-collision variable, but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it, in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while frontal-cortex-lesioned animals are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.
High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL.
Stone, John E; Messmer, Peter; Sisneros, Robert; Schulten, Klaus
2016-05-01
Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.
High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL
Stone, John E.; Messmer, Peter; Sisneros, Robert; Schulten, Klaus
2016-01-01
Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications. PMID:27747137
Mechanical Drawing/Drafting Curriculum Guide.
ERIC Educational Resources Information Center
Gregory, Margaret R.; Benson, Robert T.
This curriculum guide consists of materials for teaching a course in mechanical drawing and drafting. Addressed in the individual units of the guide are the following topics: the nature and scope of drawing and drafting, visualization and spatial relationships, drafting tools and materials, linework, freehand lettering, geometric construction,…
Aviation & Space Education: A Teacher's Resource Guide.
ERIC Educational Resources Information Center
Texas State Dept. of Aviation, Austin.
This resource guide contains information on curriculum guides, resources for teachers, computer software and computer related programs, audio/visual presentations, model aircraft and demonstration aids, training seminars and career education, and an aerospace bibliography for primary grades. Each entry includes all or some of the following items:…
High contrast sensitivity for visually guided flight control in bumblebees.
Chakravarthi, Aravin; Kelber, Almut; Baird, Emily; Dacke, Marie
2017-12-01
Many insects rely on vision to find food, to return to their nest and to carefully control their flight between these two locations. The amount of information available to support these tasks is, in part, dictated by the spatial resolution and contrast sensitivity of their visual systems. Here, we investigate the absolute limits of these visual properties for visually guided position and speed control in Bombus terrestris. Our results indicate that the limit of spatial vision in the translational motion detection system of B. terrestris lies at 0.21 cycles deg⁻¹ with a peak contrast sensitivity of at least 33. In light of earlier findings, these results indicate that bumblebees have higher contrast sensitivity in the motion detection system underlying position control than in their object discrimination system. This suggests that bumblebees, and most likely also other insects, have different visual thresholds depending on the behavioral context.
Barth, Rolf F; Kellough, David A; Allenby, Patricia; Blower, Luke E; Hammond, Scott H; Allenby, Greg M; Buja, L Maximilian
Determination of the degree of stenosis of atherosclerotic coronary arteries is an important part of postmortem examination of the heart, but, unfortunately, estimates of the degree of luminal narrowing can be imprecise and tend to be approximate. Visual guides can be useful for this assessment, but earlier attempts to develop such guides did not employ digital technology. Using this approach, we have developed two computer-generated morphometric guides to estimate the degree of luminal narrowing of atherosclerotic coronary arteries. The first is based on symmetric or eccentric circular or crescentic narrowing of the vessel lumen and the second on either slit-like or irregularly shaped narrowing of the vessel lumens. Using the Aperio ScanScope XT at a magnification of 20×, we created digital whole-slide images of 20 representative microscopic cross sections of the left anterior descending (LAD) coronary artery, stained with either hematoxylin and eosin (H&E) or Movat's pentachrome stain. These cross sections illustrated a variety of luminal profiles and degrees of stenosis. Three representative types of images were selected and a visual guide was constructed with Adobe Photoshop CS5. Using the "Scale" and "Measurement" tools, we created a series of representations of stenosis with luminal cross sections depicting 20%, 40%, 60%, 70%, 80%, and 90% occlusion of the LAD branch. Four pathologists independently reviewed and scored the degree of atherosclerotic luminal narrowing based on our visual guides. In addition, digital technology was employed to determine the degree of narrowing by measuring the cross-sectional area of the 20 microscopic sections of the vessels, first assuming no narrowing and then comparing this with the percent narrowing determined by precise measurement. Two of the observers were very experienced general autopsy pathologists, one was a first-year pathology resident on his first rotation on the autopsy service, and the fourth was a highly experienced cardiovascular pathologist. Interobserver reliability was assessed by determination of the intraclass correlation coefficient. The degrees of agreement for the two H&E- and Movat-stained sections of the LADs from each of 10 decedents were 0.874 and 0.899, respectively, indicating strong interobserver agreement. On average, the mean visual scores were ~8% less than the morphometric assessment (52.7 vs. 60.2). The visual guides that we have generated for scoring atherosclerotic luminal narrowing of coronary arteries should be helpful to a broad group of pathologists, from beginning pathology residents to experienced cardiovascular pathologists. Copyright © 2017 Elsevier Inc. All rights reserved.
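The morphometric comparison described above reduces to a simple area ratio: percent stenosis is one minus the residual lumen area over the reference (as-if-undiseased) lumen area, times 100. A minimal Python sketch with illustrative (not study) values:

# Hedged sketch: percent luminal stenosis from measured cross-sectional areas,
# as used when comparing visual scores against digital morphometry.
def percent_stenosis(residual_lumen_area, reference_lumen_area):
    """Percent narrowing = (1 - residual / reference) * 100."""
    return (1.0 - residual_lumen_area / reference_lumen_area) * 100.0

# Illustrative values in mm^2 (not study data).
print(round(percent_stenosis(residual_lumen_area=1.2, reference_lumen_area=4.0), 1))  # 70.0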
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method has two parts: the first is an object detection step based on the distance from the objects to the camera and a BLOB analysis; the second is a classification step based on visual primitives and an SVM classifier. The proposed method runs on a GPU in order to reduce processing time. This is done with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10.
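A highly simplified sketch of the detect-then-classify pipeline described above: threshold a depth map obtained from stereo, take connected components (BLOB analysis), and classify each candidate with an SVM over simple visual features. The OpenCV and scikit-learn calls are standard, but the feature choice, the thresholds, and the assumption that a metric depth map is already available are illustrative, not the paper's implementation.

# Hedged sketch of detect-then-classify for obstacle candidates:
# 1) threshold a depth map and take connected components (BLOB analysis),
# 2) classify each candidate with an SVM over simple visual features.
# Feature choice and thresholds are illustrative assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def detect_blobs(depth_map, max_distance=3.0, min_area=200):
    """Return bounding boxes (x, y, w, h) of near regions in a metric depth map."""
    near = ((depth_map > 0) & (depth_map < max_distance)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(near, connectivity=8)
    return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def candidate_features(image, box):
    """Simple per-candidate features: mean colour of the patch and box aspect ratio."""
    x, y, w, h = box
    patch = image[y:y + h, x:x + w]
    return np.concatenate([patch.reshape(-1, 3).mean(axis=0), [w / max(h, 1)]])

# Training and prediction on labelled candidates (X_train, y_train assumed available):
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# labels = [clf.predict([candidate_features(image, b)])[0] for b in detect_blobs(depth)]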
Lee wave breaking region: the map of instability development scenarios
NASA Astrophysics Data System (ADS)
Yakovenko, S. N.
2017-10-01
A numerical study of stably stratified flow above a two-dimensional cosine-shaped obstacle has been performed by DNS and LES. These methods were implemented to solve the three-dimensional Navier-Stokes equations in the Boussinesq approximation, together with the scalar diffusion equation. Results of scanning wide ranges of physical parameters (Reynolds and Prandtl/Schmidt numbers corresponding to laboratory experiments and to atmospheric or oceanic situations) are presented for instability and turbulence development scenarios in the overturning internal lee waves. These waves are generated by the obstacle in a flow with constant inflow velocity and a stable density gradient. The evolution of lee-wave breaking is explored through visualization of the velocity and scalar (density) fields and through analysis of spectra. Based on the numerical simulation results, a power-law dependence on Reynolds number is demonstrated for the wavelength of the most unstable perturbation.
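For reference, the governing equations solved in such DNS/LES studies take the following standard Boussinesq form (written here from common formulations rather than from the paper's own notation), with the scalar equation carrying the density perturbation:

\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho_0}\nabla p' + \nu\,\nabla^2 \mathbf{u} - \frac{\rho'}{\rho_0}\,g\,\hat{\mathbf{z}}, \qquad
\frac{\partial \rho'}{\partial t} + (\mathbf{u}\cdot\nabla)\rho' + w\,\frac{d\bar{\rho}}{dz} = \kappa\,\nabla^2 \rho',

with the Reynolds number Re = U h / \nu and the Prandtl/Schmidt number Pr (or Sc) = \nu / \kappa built from the inflow speed U, the obstacle height h, the kinematic viscosity \nu, and the scalar diffusivity \kappa.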
Autonomous Unmanned Helicopter System for Remote Sensing Missions in Unknown Environments
NASA Astrophysics Data System (ADS)
Merz, T.; Chapman, S.
2011-09-01
This paper presents the design of an autonomous unmanned helicopter system for low-altitude remote sensing. The proposed concepts and methods are generic and not limited to a specific helicopter. The development was driven by the need for a dependable, modular, and affordable system with sufficient payload capacity suitable for both research and real-world deployment. The helicopter can be safely operated without a backup pilot in a contained area beyond visual range. This enables data collection in inaccessible or dangerous areas. Thanks to its terrain following and obstacle avoidance capability, the system does not require a priori information about terrain elevation and obstacles. Missions are specified in state diagrams and flight plans. We present performance characteristics of our system and show results of its deployment in real-world scenarios. We have successfully completed several dozen infrastructure inspection missions and crop monitoring missions facilitating plant phenomics studies.
Review of fluorescence guided surgery visualization and overlay techniques
Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2015-01-01
In fluorescence guided surgery, data visualization represents a critical step between signal capture and display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images are reviewed, and a particular focus is placed on electronically detected and visualized signals, as required for near-infrared or low concentration tracers. Factors driving the choices such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices are outlined. Five practical suggestions are outlined for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception check. PMID:26504628
NASA Technical Reports Server (NTRS)
Hess, Bernhard J M.; Angelaki, Dora E.
2003-01-01
Rotational disturbances of the head about an off-vertical yaw axis induce a complex vestibuloocular reflex pattern that reflects the brain's estimate of head angular velocity as well as its estimate of instantaneous head orientation (at a reduced scale) in space coordinates. We show that semicircular canal and otolith inputs modulate torsional and, to a certain extent, also vertical ocular orientation of visually guided saccades and smooth-pursuit eye movements in a similar manner as during off-vertical axis rotations in complete darkness. It is suggested that this graviceptive control of eye orientation facilitates rapid visual spatial orientation during motion.
A guide to the visual analysis and communication of biomolecular structural data.
Johnson, Graham T; Hertig, Samuel
2014-10-01
Biologists regularly face an increasingly difficult task - to effectively communicate bigger and more complex structural data using an ever-expanding suite of visualization tools. Whether presenting results to peers or educating an outreach audience, a scientist can achieve maximal impact with minimal production time by systematically identifying an audience's needs, planning solutions from a variety of visual communication techniques and then applying the most appropriate software tools. A guide to available resources that range from software tools to professional illustrators can help researchers to generate better figures and presentations tailored to any audience's needs, and enable artistically inclined scientists to create captivating outreach imagery.
Lossnitzer, Dirk; Seitz, Sebastian A; Krautz, Birgit; Schnackenburg, Bernhard; André, Florian; Korosoglou, Grigorios; Katus, Hugo A; Steen, Henning
2015-07-26
To investigate whether magnetic resonance (MR)-guided biopsy can improve the performance and safety of such procedures, a novel MR-compatible bioptome was evaluated in a series of in-vitro experiments in a 1.5T magnetic resonance imaging (MRI) system. The bioptome was inserted into explanted porcine and bovine hearts under real-time MR guidance employing a steady-state free precession sequence. The artifact produced by the metal element at the tip and the signal voids caused by the bioptome were visually tracked for navigation and allowed its constant and precise localization. Cardiac structural elements and the target regions for the biopsy were clearly visible. Our method allowed significantly better spatial visualization of the bioptome's tip compared with conventional X-ray guidance. The specific device design of the bioptome avoided inducible currents and therefore subsequent heating. The novel MR-compatible bioptome provided superior cardiovascular magnetic resonance soft-tissue visualization for MR-guided myocardial biopsies. Not least, the use of MRI guidance for endomyocardial biopsies completely avoided radiation exposure for both patients and interventionalists. MRI-guided endomyocardial biopsy provides better navigation than conventional X-ray guidance and could therefore improve the specificity and reproducibility of cardiac biopsies in future studies.
Kimura, Takeshi; Shiomi, Hiroki; Kuribayashi, Sachio; Isshiki, Takaaki; Kanazawa, Susumu; Ito, Hiroshi; Ikeda, Shunya; Forrest, Ben; Zarins, Christopher K; Hlatky, Mark A; Norgaard, Bjarne L
2015-01-01
Percutaneous coronary intervention (PCI) based on fractional flow reserve (FFRcath) measurement during invasive coronary angiography (CAG) results in improved patient outcome and reduced healthcare costs. FFR can now be computed non-invasively from standard coronary CT angiography (cCTA) scans (FFRCT). The purpose of this study is to determine the potential impact of non-invasive FFRCT on costs and clinical outcomes of patients with suspected coronary artery disease in Japan. Clinical data from 254 patients in the HeartFlowNXT trial, costs of goods and services in Japan, and clinical outcome data from the literature were used to estimate the costs and outcomes of 4 clinical pathways: (1) CAG-visual guided PCI, (2) CAG-FFRcath guided PCI, (3) cCTA followed by CAG-visual guided PCI, (4) cCTA-FFRCT guided PCI. The CAG-visual strategy demonstrated the highest projected cost ($10,360) and highest projected 1-year death/myocardial infarction rate (2.4 %). An assumed price for FFRCT of US $2,000 produced equivalent clinical outcomes (death/MI rate: 1.9 %) and healthcare costs ($7,222) for the cCTA-FFRCT strategy and the CAG-FFRcath guided PCI strategy. Use of the cCTA-FFRCT strategy to select patients for PCI would result in 32 % lower costs and 19 % fewer cardiac events at 1 year compared to the most commonly used CAG-visual strategy. Use of cCTA-FFRCT to select patients for CAG and PCI may reduce costs and improve clinical outcome in patients with suspected coronary artery disease in Japan.
The Effect of Physical Load and Environment on Soldier Performance
2014-02-01
Report excerpts (fragmentary, as extracted): ... when walking over obstacles compared with standing still, with and without a load. Knapik et al. (1990) found significant decrements in military ... load carriage (34–61 kg carried 20 km) led to decrements in subsequent physical performance but not in cognitive ability. Crowell et al. (1999) found ... with a simultaneous visual navigation task was thought to be advantageous; Wickens's (1984) multiple resource theory stated that different tasks can ...
Optoelectronic aid for patients with severely restricted visual fields in daylight conditions
NASA Astrophysics Data System (ADS)
Peláez-Coca, María Dolores; Sobrado-Calvo, Paloma; Vargas-Martín, Fernando
2011-11-01
In this study we evaluated the immediate effectiveness of an optoelectronic visual field expander in a sample of subjects with retinitis pigmentosa suffering from a severe peripheral visual field restriction. The aid uses the augmented view concept and provides subjects with visual information from outside their visual field. The tests were carried out in daylight conditions. The optoelectronic aid comprises a FPGA (real-time video processor), a wide-angle mini camera and a transparent see-through head-mounted display. This optoelectronic aid is called SERBA (Sistema Electro-óptico Reconfigurable de Ayuda para Baja Visión). We previously showed that, without compromising residual vision, the SERBA system provides information about objects within an area about three times greater on average than the remaining visual field of the subjects [1]. In this paper we address the effects of the device on mobility under daylight conditions with and without SERBA. The participants were six subjects with retinitis pigmentosa. In this mobility test, better results were obtained when subjects were wearing the SERBA system; specifically, both the number of contacts with low-level obstacles and mobility errors decreased significantly. A longer training period with the device might improve its usefulness.
Introduction to the MCS. Visual Media Learning Guide.
ERIC Educational Resources Information Center
Spokane Falls Community Coll., WA.
This student learning guide is designed to introduce graphic arts students to the MCS (Modular Composition System) compugraphic typesetting system. Addressed in the individual units of the competency-based guide are the following tasks: programming the compugraphic typesetting system, creating a new file and editing a file, operating a…
Graphic Design Career Guide 2. Revised Edition.
ERIC Educational Resources Information Center
Craig, James
The graphic design field is diverse and includes many areas of specialization. This guide introduces students to career opportunities in graphic design. The guide is organized in four parts. "Part One: Careers in Graphic Design" identifies and discusses the various segments of the graphic design industry, including: Advertising, Audio-Visual, Book…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wybranski, Christian, E-mail: Christian.Wybranski@uk-koeln.de; Pech, Maciej; Lux, Anke
Objective: To assess the feasibility of a hybrid approach employing MRI-guided bile duct (BD) puncture for subsequent fluoroscopy-guided biliary interventions in patients with non-dilated (≤3 mm) or dilated BD (≥3 mm) but unfavorable conditions for ultrasonography (US)-guided BD puncture. Methods: A total of 23 hybrid interventions were performed in 21 patients. Visualization of BD and puncture needles (PN) in the interventional MR images was rated on a 5-point Likert scale by two radiologists. Technical success, planning time, BD puncture time and positioning adjustments of the PN, as well as technical success of the biliary intervention and complication rate, were recorded. Results: Visualization even of third-order non-dilated BD and PN was rated excellent by both radiologists with good to excellent interrater agreement. MRI-guided BD puncture was successful in all cases. Planning and BD puncture times were 1:36 ± 2:13 (0:16–11:07) min and 3:58 ± 2:35 (1:11–9:32) min. Positioning adjustments of the PN were necessary in two patients. Repeated capsular puncture was not necessary in any case. All biliary interventions were completed successfully without major complications. Conclusion: A hybrid approach that employs MRI-guided BD puncture for subsequent fluoroscopy-guided biliary intervention is feasible in clinical routine and yields high technical success in patients with non-dilated BD and/or unfavorable conditions for US-guided puncture. Excellent visualization of BD and PN in near-real-time interventional MRI allows successful cannulation of the BD.
Fadlallah, Ali; Dirani, Ali; Chelala, Elias; Antonios, Rafic; Cherfan, George; Jarade, Elias
2014-10-01
To evaluate the safety and clinical outcome of combined non-topography-guided photorefractive keratectomy (PRK) and corneal collagen cross-linking (CXL) for the treatment of mild refractive errors in patients with early stage keratoconus. A retrospective, nonrandomized study of patients with early stage keratoconus (stage 1 or 2) who underwent simultaneous non-topography-guided PRK and CXL. All patients had at least 2 years of follow-up. Data were collected preoperatively and postoperatively at the 6-month, 1-year, and 2-year follow-up visit after combined non-topography-guided PRK and CXL. Seventy-nine patients (140 eyes) were included in the study. Combined non-topography-guided PRK and CXL induced a significant improvement in both visual acuity and refraction. Uncorrected distance visual acuity significantly improved from 0.39 ± 0.22 logMAR before combined non-topography-guided PRK and CXL to 0.12 ± 0.14 logMAR at the last follow-up visit (P <.001) and corrected distance visual acuity remained stable (0.035 ± 0.062 logMAR preoperatively vs 0.036 ± 0.058 logMAR postoperatively, P =.79). The mean spherical equivalent decreased from -1.78 ± 1.43 to -0.42 ± 0.60 diopters (D) (P <.001), and the mean cylinder decreased from 1.47 ± 1.10 to 0.83 ± 0.55 D (P <.001). At the last follow-up visit mean keratometry flat was 43.30 ± 1.75 vs 45.62 ± 1.72 D preoperatively (P = .03) and mean keratometry steep was 44.39 ± 3.14 vs 46.53 ± 2.13 D preoperatively (P = .02). Mean central corneal thickness decreased from 501.74 ± 13.11 to 475.93 ± 12.25 µm following combined non-topography-guided PRK and CXL (P < .001). No intraoperative complications occurred. Four eyes developed mild haze that responded well to a short course of topical steroids. No eye developed infectious keratitis. Combined non-topography-guided PRK and CXL is an effective and safe option for correcting mild refractive error and improving visual acuity in patients with early stable keratoconus. Copyright 2014, SLACK Incorporated.
Impaired visually guided weight-shifting ability in children with cerebral palsy.
Ballaz, Laurent; Robert, Maxime; Parent, Audrey; Prince, François; Lemay, Martin
2014-09-01
The ability to control voluntary weight shifting is crucial in many functional tasks. To our knowledge, weight shifting ability in response to a visual stimulus has never been evaluated in children with cerebral palsy (CP). The aim of the study was (1) to propose a new method to assess visually guided medio-lateral (M/L) weight shifting ability and (2) to compare weight-shifting ability in children with CP and typically developing (TD) children. Ten children with spastic diplegic CP (Gross Motor Function Classification System level I and II; age 7-12 years) and 10 TD age-matched children were tested. Participants played with the skiing game on the Wii Fit game console. Center of pressure (COP) displacements, trunk and lower-limb movements were recorded during the last virtual slalom. Maximal isometric lower limb strength and postural control during quiet standing were also assessed. Lower-limb muscle strength was reduced in children with CP compared to TD children and postural control during quiet standing was impaired in children with CP. As expected, the skiing game mainly resulted in M/L COP displacements. Children with CP showed lower M/L COP range and velocity as compared to TD children but larger trunk movements. Trunk and lower extremity movements were less in phase in children with CP compared to TD children. Commercially available active video games can be used to assess visually guided weight shifting ability. Children with spastic diplegic CP showed impaired visually guided weight shifting which can be explained by non-optimal coordination of postural movement and reduced muscular strength. Copyright © 2014 Elsevier Ltd. All rights reserved.
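As a concrete illustration of the two medio-lateral centre-of-pressure measures reported above (M/L COP range and velocity), the short Python sketch below computes them from a sampled COP trace. It is not the authors' analysis pipeline; the sampling rate and the synthetic slalom-like trace are assumptions.

# Simple sketch (not the study's pipeline) of the two M/L centre-of-pressure
# measures reported: COP range and mean COP velocity, from a sampled
# medio-lateral COP trace. The sampling rate and synthetic trace are assumed.
import numpy as np

def ml_cop_measures(cop_ml_cm, fs_hz):
    cop_range = cop_ml_cm.max() - cop_ml_cm.min()
    velocity = np.mean(np.abs(np.diff(cop_ml_cm))) * fs_hz  # cm/s
    return cop_range, velocity

fs = 100.0
t = np.arange(0, 30, 1 / fs)
cop = 4.0 * np.sin(2 * np.pi * 0.4 * t)   # simulated slalom-like weight shifts
print("M/L range %.1f cm, mean velocity %.1f cm/s" % ml_cop_measures(cop, fs))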
NASA Astrophysics Data System (ADS)
Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi
2007-07-01
Intensity-modulated radiation therapy (IMRT) uses nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that, after appropriate training, the amplitude and duration of the volunteer's breathing could be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.
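To make the visual-guiding idea concrete, here is a minimal Python sketch, not the authors' software, of a cyclic guide waveform and a tolerance test that decides whether the measured breathing trace is regular enough to synchronize a leaf sequence with it. The period, amplitude and tolerance values are illustrative assumptions.

# Minimal sketch: a cyclic visual guide waveform and a tolerance check used to
# decide whether the breathing trace is regular enough to synchronize an MLC
# leaf sequence with it. Period, amplitude and tolerance are assumed values.
import numpy as np

def guide_waveform(t, period_s=4.0, amplitude_mm=10.0):
    """Cyclic target pattern shown to the patient (cos**2 breathing shape)."""
    return amplitude_mm * np.cos(np.pi * t / period_s) ** 2

def breathing_is_regular(t, measured_mm, tolerance_mm=2.0):
    """True if the measured trace stays within tolerance of the guide."""
    return np.all(np.abs(measured_mm - guide_waveform(t)) <= tolerance_mm)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 500)
measured = guide_waveform(t) + rng.normal(0.0, 0.4, t.size)  # simulated patient
print("synchronize leaf sequence:", breathing_is_regular(t, measured))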
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Scherman Rydhög, Jonas; Riisgaard de Blanck, Steen; Josipovic, Mirjana; Irming Jølck, Rasmus; Larsen, Klaus Richter; Clementsen, Paul; Lars Andersen, Thomas; Poulsen, Per Rugaard; Fredberg Persson, Gitte; Munck Af Rosenschold, Per
2017-04-01
The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic- and random errors, and inter-breath-hold position uncertainty were investigated using an image based tracking algorithm. A mean reduction of 2-6mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR) and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic- (standard deviation (SD) of all the mean marker positions) and random errors (root-mean-square of the intra-BH SD) during DIBH were 0.5 and 0.3mm (AP), 0.5 and 0.3mm (LR), 0.8 and 0.4mm (CC), respectively. The mean inter-breath-hold shifts were -0.3mm (AP), -0.2mm (LR), and -0.2mm (CC). Intra- and inter-breath-hold uncertainty of tumours and lymph nodes were small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH. Copyright © 2017 Elsevier B.V. All rights reserved.
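The systematic and random error definitions quoted in this abstract translate directly into a few lines of code. The Python sketch below applies them to synthetic per-breath-hold marker traces; the traces and their noise levels are assumptions, not study data.

# Minimal sketch of the error definitions quoted in the abstract:
# systematic error = SD of the per-breath-hold mean marker positions,
# random error = root-mean-square of the intra-breath-hold SDs.
# The marker trajectories below are synthetic, for illustration only.
import numpy as np

def breath_hold_errors(positions_per_bh):
    """positions_per_bh: list of 1-D arrays, one per breath-hold (one axis, mm)."""
    means = np.array([p.mean() for p in positions_per_bh])
    sds = np.array([p.std(ddof=1) for p in positions_per_bh])
    systematic = means.std(ddof=1)           # SD of the mean positions
    random_err = np.sqrt(np.mean(sds ** 2))  # RMS of the intra-BH SDs
    return systematic, random_err

rng = np.random.default_rng(0)
bh_traces = [rng.normal(loc=rng.normal(0, 0.5), scale=0.3, size=200) for _ in range(9)]
print("systematic %.2f mm, random %.2f mm" % breath_hold_errors(bh_traces))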
NASA Technical Reports Server (NTRS)
Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.
2008-01-01
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.
Visualizing the semantic structure in classical music works.
Chan, Wing-Yi; Qu, Huamin; Mak, Wai-Ho
2010-01-01
A major obstacle in the appreciation of classical music is that extensive training is required to understand musical structure and compositional techniques toward comprehending the thoughts behind the musical work. In this paper, we propose an innovative visualization solution to reveal the semantic structure in classical orchestral works such that users can gain insights into musical structure and appreciate the beauty of music. We formulate the semantic structure into macrolevel layer interactions, microlevel theme variations, and macro-micro relationships between themes and layers to abstract the complicated construction of a musical composition. The visualization has been applied with success in understanding some classical music works as supported by highly promising user study results with the general audience and very positive feedback from music students and experts, demonstrating its effectiveness in conveying the sophistication and beauty of classical music to novice users with informative and intuitive displays.
Vibrotactile Feedbacks System for Assisting the Physically Impaired Persons for Easy Navigation
NASA Astrophysics Data System (ADS)
Safa, M.; Geetha, G.; Elakkiya, U.; Saranya, D.
2018-04-01
The NAYAN architecture is designed to help visually impaired people navigate. As is well known, visually impaired people need special support even to access services such as public transportation. The prototype is a portable device that is easy to carry, allowing travel through both familiar and unfamiliar environments. The system consists of a GPS receiver that obtains NMEA data from satellites and passes it to the user's smartphone through an Arduino board. The application uses two vibrotactile feedback units placed on the left and right shoulders; their vibrations convey information about the current location. An ultrasonic sensor detects obstacles in front of the visually impaired person. A Bluetooth module connected to the Arduino board sends the information received from the GPS to the user's mobile phone.
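A hedged sketch of the kind of logic such a system needs is given below: parsing an NMEA GGA sentence from the GPS module and choosing a vibrotactile cue from a bearing error and the ultrasonic distance reading. It is written in Python for illustration only; the thresholds and the cue-selection rule are assumptions, not the published NAYAN firmware.

# Illustrative sketch only (not the published NAYAN firmware): parse an NMEA
# GGA sentence from the GPS module and decide which shoulder motor to vibrate
# and whether the ultrasonic sensor reports an obstacle. The distance threshold
# and the bearing logic are assumptions.

def parse_gga(sentence):
    """Return (latitude, longitude) in decimal degrees from a $GPGGA sentence."""
    f = sentence.split(",")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return lat, lon

def feedback(bearing_error_deg, obstacle_cm, obstacle_limit_cm=100):
    """Choose the vibrotactile cue: turn left/right, or warn about an obstacle."""
    if obstacle_cm < obstacle_limit_cm:
        return "both shoulders: obstacle ahead"
    return "right shoulder" if bearing_error_deg > 0 else "left shoulder"

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
print(feedback(bearing_error_deg=-15.0, obstacle_cm=80))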
Chow, John W; Stokic, Dobrivoje S
2018-03-01
We examined changes in variability, accuracy, frequency composition, and temporal regularity of force signal from vision-guided to memory-guided force-matching tasks in 17 subacute stroke and 17 age-matched healthy subjects. Subjects performed a unilateral isometric knee extension at 10, 30, and 50% of peak torque [maximum voluntary contraction (MVC)] for 10 s (3 trials each). Visual feedback was removed at the 5-s mark in the first two trials (feedback withdrawal), and 30 s after the second trial the subjects were asked to produce the target force without visual feedback (force recall). The coefficient of variation and constant error were used to quantify force variability and accuracy. Force structure was assessed by the median frequency, relative spectral power in the 0-3-Hz band, and sample entropy of the force signal. At 10% MVC, the force signal in subacute stroke subjects became steadier, more broadband, and temporally more irregular after the withdrawal of visual feedback, with progressively larger error at higher contraction levels. Also, the lack of modulation in the spectral frequency at higher force levels with visual feedback persisted in both the withdrawal and recall conditions. In terms of changes from the visual feedback condition, the feedback withdrawal produced a greater difference between the paretic, nonparetic, and control legs than the force recall. The overall results suggest improvements in force variability and structure from vision- to memory-guided force control in subacute stroke despite decreased accuracy. Different sensory-motor memory retrieval mechanisms seem to be involved in the feedback withdrawal and force recall conditions, which deserves further study. NEW & NOTEWORTHY We demonstrate that in the subacute phase of stroke, force signals during a low-level isometric knee extension become steadier, more broadband in spectral power, and more complex after removal of visual feedback. Larger force errors are produced when recalling target forces than immediately after withdrawing visual feedback. Although visual feedback offers better accuracy, it worsens force variability and structure in subacute stroke. The feedback withdrawal and force recall conditions seem to involve different memory retrieval mechanisms.
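For readers unfamiliar with the force-signal measures listed here, the Python sketch below computes the coefficient of variation, constant error, median frequency, relative 0-3 Hz power, and sample entropy on a synthetic 10-s trial. The parameter choices (m = 2, r = 0.2 x SD for sample entropy) are common defaults, not necessarily those used in the study.

# Hedged sketch of the force-signal measures named in the abstract, computed on
# a synthetic 10-s isometric trial. Parameter choices are common defaults.
import numpy as np

def force_metrics(force, target, fs):
    cv = 100.0 * force.std() / force.mean()             # variability
    constant_error = force.mean() - target              # accuracy
    psd = np.abs(np.fft.rfft(force - force.mean())) ** 2
    freqs = np.fft.rfftfreq(force.size, d=1.0 / fs)
    cum = np.cumsum(psd) / psd.sum()
    median_freq = freqs[np.searchsorted(cum, 0.5)]      # 50% of spectral power
    rel_power_0_3 = psd[freqs <= 3.0].sum() / psd.sum() # relative 0-3 Hz power
    return cv, constant_error, median_freq, rel_power_0_3

def sample_entropy(x, m=2, r_factor=0.2):
    """Simple (approximate) sample entropy: -ln(matches of length m+1 / length m)."""
    x = np.asarray(x, float)
    r, n = r_factor * x.std(), len(x)
    def matches(mm):
        tpl = np.array([x[i:i + mm] for i in range(n - mm)])
        return sum(np.sum(np.max(np.abs(tpl - t), axis=1) <= r) - 1 for t in tpl)
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
fs, target = 100.0, 50.0
t = np.arange(0, 10, 1 / fs)
force = target + rng.normal(0, 1.5, t.size) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
print(force_metrics(force, target, fs), sample_entropy(force[::4]))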
Hilbert, Sebastian; Sommer, Philipp; Gutberlet, Matthias; Gaspar, Thomas; Foldyna, Borek; Piorkowski, Christopher; Weiss, Steffen; Lloyd, Thomas; Schnackenburg, Bernhard; Krueger, Sascha; Fleiter, Christian; Paetsch, Ingo; Jahnke, Cosima; Hindricks, Gerhard; Grothoff, Matthias
2016-04-01
Recently, cardiac magnetic resonance (CMR) imaging has been found feasible for visualizing the underlying substrate of cardiac arrhythmias as well as for visualizing cardiac catheters during diagnostic and ablation procedures. Real-time CMR-guided cavotricuspid isthmus ablation was performed in a series of six patients using a combination of active catheter tracking and catheter visualization with real-time MR imaging. Cardiac magnetic resonance utilizing a 1.5 T system was performed in patients under deep propofol sedation. A three-dimensional whole-heart sequence with navigator technique and a fast automated segmentation algorithm was used for online segmentation of all cardiac chambers, which were thereafter displayed on a dedicated image guidance platform. In three out of six patients complete isthmus block could be achieved in the MR scanner, and two of these patients did not need any additional fluoroscopy. In the first patient, technical issues called for completion of the procedure in a conventional laboratory; in another two patients the isthmus was partially blocked by magnetic resonance imaging (MRI)-guided ablation. The mean procedural time for the MR procedure was 109 ± 58 min. Intubation of the CS was performed within a mean time of 2.75 ± 2.21 min. Total fluoroscopy time for completion of the isthmus block ranged from 0 to 7.5 min. The combination of active catheter tracking and passive real-time visualization in CMR-guided electrophysiologic (EP) studies using advanced interventional hardware and software was safe and enabled efficient navigation, mapping, and ablation. These cases demonstrate significant progress in the development of MR-guided EP procedures. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Sleep Disturbances among Persons Who Are Visually Impaired: Survey of Dog Guide Users.
ERIC Educational Resources Information Center
Fouladi, Massoud K.; Moseley, Merrick J.; Jones, Helen S.; Tobin, Michael J.
1998-01-01
A survey completed by 1237 adults with severe visual impairments found that 20% described the quality of their sleep as poor or very poor. Exercise was associated with better sleep and depression with poorer sleep. However, visual acuity did not predict sleep quality, casting doubt on the idea that restricted visual input (light) causes sleep…
Visual Literacy for Libraries: A Practical, Standards-Based Guide
ERIC Educational Resources Information Center
Brown, Nicole E.; Bussert, Kaila; Hattwig, Denise; Medaille, Ann
2016-01-01
The importance of images and visual media in today's culture is changing what it means to be literate in the 21st century. Digital technologies have made it possible for almost anyone to create and share visual media. Yet the pervasiveness of images and visual media does not necessarily mean that individuals are able to critically view, use, and…
Prototyping Visual Learning Analytics Guided by an Educational Theory Informed Goal
ERIC Educational Resources Information Center
Hillaire, Garron; Rappolt-Schlichtmann, Gabrielle; Ducharme, Kim
2016-01-01
Prototype work can support the creation of data visualizations throughout the research and development process through paper prototypes with sketching, designed prototypes with graphic design tools, and functional prototypes to explore how the implementation will work. One challenging aspect of data visualization work is coordinating the expertise…
Are Spatial Visualization Abilities Relevant to Virtual Reality?
ERIC Educational Resources Information Center
Chen, Chwen Jen
2006-01-01
This study aims to investigate the effects of virtual reality (VR)-based learning environment on learners of different spatial visualization abilities. The findings of the aptitude-by-treatment interaction study have shown that learners benefit most from the Guided VR mode, irrespective of their spatial visualization abilities. This indicates that…
Exploring Visual Arts and Crafts Careers. A Student Guidebook.
ERIC Educational Resources Information Center
Dubman, Shelia; And Others
One of six student guidebooks in a series of 11 arts and humanities career exploration guides for grade 7-12 teachers, counselors, and students, this student book on exploration of visual arts and crafts careers presents information on specific occupations in seven different career areas: Visual communications, product design, environmental…
Visually Guided Step Descent in Children with Williams Syndrome
ERIC Educational Resources Information Center
Cowie, Dorothy; Braddick, Oliver; Atkinson, Janette
2012-01-01
Individuals with Williams syndrome (WS) have impairments in visuospatial tasks and in manual visuomotor control, consistent with parietal and cerebellar abnormalities. Here we examined whether individuals with WS also have difficulties in visually controlling whole-body movements. We investigated visual control of stepping down at a change of…
Guiding Visual Attention in Decision Making--Verbal Instructions versus Flicker Cueing
ERIC Educational Resources Information Center
Canal-Bruland, Rouwen
2009-01-01
Perceptual-cognitive processes play an important role in open, fast-paced, interceptive sports such as tennis, basketball, and soccer. Visual information processing has been shown to distinguish skilled from less skilled athletes. Research on the perceptual demands of sports performance has raised questions regarding athletes' visual information…
Learning from Chemical Visualizations: Comparing Generation and Selection
ERIC Educational Resources Information Center
Zhang, Zhihui Helen; Linn, Marcia C.
2013-01-01
Dynamic visualizations can make unseen phenomena such as chemical reactions visible but students need guidance to benefit from them. This study explores the value of generating drawings versus selecting among alternatives to guide students to learn chemical reactions from a dynamic visualization of hydrogen combustion as part of an online inquiry…
Visual perceptual learning by operant conditioning training follows rules of contingency.
Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo
2015-01-01
Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning.
Visual perceptual learning by operant conditioning training follows rules of contingency
Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo
2015-01-01
Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning. PMID:26028984
Tumor-associated myeloid cells as guiding forces of cancer cell stemness.
Sica, Antonio; Porta, Chiara; Amadori, Alberto; Pastò, Anna
2017-08-01
Due to their ability to differentiate into various cell types and to support tissue regeneration, stem cells simultaneously became the holy grail of regenerative medicine and the evil obstacle in cancer therapy. Several studies have investigated niche-related conditions that favor stemness properties and increasingly emphasized their association with an inflammatory environment. Tumor-associated macrophages (TAMs) and myeloid-derived suppressor cells (MDSCs) are major orchestrators of cancer-related inflammation, able to dynamically express different polarized inflammatory programs that promote tumor outgrowth, including tumor angiogenesis, immunosuppression, tissue remodeling and metastasis formation. In addition, these myeloid populations support cancer cell stemness, favoring tumor maintenance and progression, as well as resistance to anticancer treatments. Here, we discuss inflammatory circuits and molecules expressed by TAMs and MDSCs as guiding forces of cancer cell stemness.
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The vehicle's autonomous navigation ability and road-following precision are mainly influenced by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots because of their robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
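To illustrate the flavour of fuzzy obstacle-avoidance control described here (not the authors' controller), the Python sketch below maps left and right obstacle distances through a simple "near" membership function and two rules to a steering command. The membership breakpoint is an assumption.

# Hedged sketch of fuzzy obstacle-avoidance steering (not the authors' design):
# a triangular "near" membership over obstacle distance drives two rules.
def near(d):
    """Membership of "obstacle is near", for distance d in metres (breakpoint assumed)."""
    return max(0.0, min(1.0, (2.0 - d) / 2.0))

def steering(dist_left, dist_right):
    """Positive = steer right, negative = steer left, in [-1, 1]."""
    # rule 1: IF obstacle near on the left THEN steer right
    # rule 2: IF obstacle near on the right THEN steer left
    return near(dist_left) - near(dist_right)

print(steering(dist_left=0.8, dist_right=3.0))   # obstacle close on the left -> steer right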
Distributive Education Resource Supplement to the Consumer Education Curriculum Guide for Ohio.
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Vocational Education.
The activities contained in the guide are designed to supplement the distributive education curriculum with information that will prepare the student to become a more informed, skillful employee and help the marketing career oriented student better visualize his customer's buying problems. Four overall objectives are stated. The guide is organized…
Fiscal Officer Training, 1999-2000. Participant's Guide.
ERIC Educational Resources Information Center
Department of Education, Washington, DC.
This guide is intended for use by participants (college fiscal officers, business officers, bursars, loan managers, etc.) in a two-day workshop on Title IV of the reauthorized Higher Education Act. The guide includes copies of the visual displays used in the workshop, space for individual notes, sample forms, sample computer screens, quizzes, and…
Techniques for Daily Living: Curriculum Guides.
ERIC Educational Resources Information Center
Wooldridge, Lillian; And Others
Presented are specific guides concerning techniques for daily living which were developed by the child care staff at the Illinois Braille and Sight Saving School. The guides are designed for cottage parents of the children, who may have both visual and other handicaps, and show what daily living skills are necessary and appropriate for the…
A Visual Arts Guide for Idaho Schools, Grades 7-12.
ERIC Educational Resources Information Center
Idaho State Dept. of Education, Boise.
Approximately 50 art activities for students in junior and senior high school are presented in this curriculum guide. Introductory sections define the roles of school superintendents, principals, art supervisors, and art teachers in supporting art programs, and outline goals and objectives of an art curriculum. The bulk of the guide consists of…
User's Guide for Flight Simulation Data Visualization Workstation
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Chen, Ronnie; Kenney, Patrick S.; Koval, Christopher M.; Hutchinson, Brian K.
1996-01-01
Today's modern flight simulation research produces vast amounts of time sensitive data. The meaning of this data can be difficult to assess while in its raw format. Therefore, a method of breaking the data down and presenting it to the user in a graphical format is necessary. Simulation Graphics (SimGraph) is intended as a data visualization software package that will incorporate simulation data into a variety of animated graphical displays for easy interpretation by the simulation researcher. This document is intended as an end user's guide.
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology in optic canal decompression for traumatic optic neuropathy.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
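A standard signal-detection-theory account of accuracy in an M-location target-localization search, of the kind this abstract compares against, can be written down in a few lines: the target is localized correctly when its noisy response exceeds the responses at all other locations, giving P(correct) = integral of phi(x - d') * Phi(x)^(M-1) dx. The Python sketch below evaluates this integral numerically; the d' values and set size are illustrative, and the expression is the generic SDT maximum rule rather than the authors' specific analytic Guided Search equations.

# Hedged illustration of an SDT account of target-localization accuracy:
# the target is found when its noisy response exceeds those at the M-1
# distractor locations. d' and the set size below are assumptions.
import math

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(d_prime, m_locations, lo=-8.0, hi=8.0, n=4000):
    """Numerically integrate phi(x - d') * Phi(x)**(M-1) over x."""
    dx = (hi - lo) / n
    return sum(phi(x - d_prime) * Phi(x) ** (m_locations - 1) * dx
               for x in (lo + (i + 0.5) * dx for i in range(n)))

for d in (0.5, 1.0, 2.0):
    print("d' = %.1f  P(correct, 8 locations) = %.3f" % (d, p_correct(d, 8)))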
Gogoshin, Grigoriy; Boerwinkle, Eric
2017-01-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on less-than-exotic computer hardware are a priority, as well as the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc. PMID:27681505
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-04-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on less-than-exotic computer hardware are a priority, as well as the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
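As a toy illustration of the structure-search problem whose computational cost this abstract refers to (and emphatically not the BNOmics algorithm itself), the Python sketch below performs greedy hill climbing over edge additions with a Gaussian BIC score on synthetic data. Variable names, the scoring choice and the toy data are assumptions.

# Toy sketch of Bayesian-network structure search (not the BNOmics algorithm):
# greedy hill climbing over single-edge additions scored with a Gaussian BIC.
import numpy as np
from itertools import permutations

def bic_node(data, child, parents):
    """Gaussian BIC of one node given its parent set (via linear regression)."""
    y, n = data[:, child], data.shape[0]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = max((y - X @ beta).var(), 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * (X.shape[1] + 1) * np.log(n)

def creates_cycle(edges, u, v):
    """Would adding the directed edge u -> v create a cycle?"""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(w for (p, w) in edges if p == node)
    return False

def hill_climb(data):
    d = data.shape[1]
    parents, edges, improved = {i: [] for i in range(d)}, set(), True
    while improved:
        improved = False
        for u, v in permutations(range(d), 2):
            if (u, v) in edges or creates_cycle(edges, u, v):
                continue
            gain = bic_node(data, v, parents[v] + [u]) - bic_node(data, v, parents[v])
            if gain > 0:                 # keep the edge only if BIC improves
                parents[v].append(u)
                edges.add((u, v))
                improved = True
    return edges

rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + rng.normal(scale=0.5, size=500)
x2 = -0.6 * x1 + rng.normal(scale=0.5, size=500)
print(hill_climb(np.column_stack([x0, x1, x2])))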
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
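In the spirit of the simple closed-loop steering idea described here (though not the specific model in the paper), the Python sketch below feeds lane-position and heading errors back into a steering command and shows the vehicle returning to the lane centre. The kinematics, speed and gains are illustrative assumptions.

# Hedged sketch of a closed-loop "control theoretic" steering model: the driver
# feeds back lateral and heading errors to a steering command. The kinematics,
# speed and gains are illustrative assumptions, not the paper's model.
import numpy as np

def simulate(y0=1.5, speed=20.0, k_y=0.02, k_psi=0.8, dt=0.02, t_end=10.0):
    y, psi, history = y0, 0.0, []
    for _ in range(int(t_end / dt)):
        steer = -(k_y * y + k_psi * psi)   # visually guided steering command
        psi += steer * dt                  # heading responds to steering
        y += speed * np.sin(psi) * dt      # lateral position integrates heading
        history.append(y)
    return np.array(history)

trace = simulate()
print("initial offset 1.50 m -> final offset %.3f m" % trace[-1])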
Hendrix, Philipp; Senger, Sebastian; Griessenauer, Christoph J; Simgen, Andreas; Linsler, Stefan; Oertel, Joachim
2018-01-01
To report a technique for endoscopic cystoventriculostomy guided by preoperative navigated transcranial magnetic stimulation (nTMS) and tractography in a patient with a large speech-eloquent arachnoid cyst. A 74-year-old woman presented with a seizure and subsequent persistent anomic aphasia from a progressive left-sided parietal arachnoid cyst. An endoscopic cystoventriculostomy and endoscope-assisted ventricle catheter placement were performed. Surgery was guided by preoperative nTMS and tractography to avoid eloquent language, motor, and visual pathways. Preoperative nTMS motor and language mapping were used to guide tractography of motor and language white matter tracts. The ideal locations of the entry point and cystoventriculostomy, as well as the trajectory for stent placement, were determined preoperatively with a pseudo-3-dimensional model visualizing eloquent language, motor, and visual cortical and subcortical information. The early postoperative course was uneventful. At her 3-month follow-up visit, her language impairments had completely recovered. Additionally, magnetic resonance imaging demonstrated complete collapse of the arachnoid cyst. The combination of nTMS and tractography supports the identification of a safe trajectory for cystoventriculostomy in eloquent arachnoid cysts. Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of visual and tactile feedback on kinematic synergies in the grasping hand.
Patel, Vrajeshri; Burns, Martin; Vinjamuri, Ramana
2016-08-01
The human hand uses a combination of feedforward and feedback mechanisms to accomplish high degree of freedom in grasp control efficiently. In this study, we used a synergy-based control model to determine the effect of sensory feedback on kinematic synergies in the grasping hand. Ten subjects performed two types of grasps: one that included feedback (real) and one without feedback (memory-guided), at two different speeds (rapid and natural). Kinematic synergies were extracted from rapid real and rapid memory-guided grasps using principal component analysis. Synergies extracted from memory-guided grasps revealed greater preservation of natural inter-finger relationships than those found in corresponding synergies extracted from real grasps. Reconstruction of natural real and natural memory-guided grasps was used to test performance and generalizability of synergies. A temporal analysis of reconstruction patterns revealed the differing contribution of individual synergies in real grasps versus memory-guided grasps. Finally, the results showed that memory-guided synergies could not reconstruct real grasps as accurately as real synergies could reconstruct memory-guided grasps. These results demonstrate how visual and tactile feedback affects a closed-loop synergy-based motor control system.
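The synergy extraction step described here is essentially principal component analysis of joint-angle data. The Python sketch below, which is not the study's pipeline, extracts synergies from synthetic grasp data with an SVD and reconstructs the grasps from the first two components; the simulated hand and the number of retained synergies are assumptions.

# Minimal sketch of PCA-based kinematic synergy extraction and reconstruction
# (not the study's pipeline); the synthetic "hand" data are assumptions.
import numpy as np

def extract_synergies(angles, n_synergies):
    """angles: trials x joints. Returns (mean, synergy matrix of shape n x joints)."""
    mean = angles.mean(axis=0)
    _, _, vt = np.linalg.svd(angles - mean, full_matrices=False)
    return mean, vt[:n_synergies]

def reconstruct(angles, mean, synergies):
    """Project grasps onto the synergies and rebuild the joint angles."""
    scores = (angles - mean) @ synergies.T
    return mean + scores @ synergies

rng = np.random.default_rng(2)
latent = rng.normal(size=(60, 2))               # two underlying synergies
mixing = rng.normal(size=(2, 15))               # 15 joint angles
grasps = latent @ mixing + rng.normal(scale=0.1, size=(60, 15))
mean, syn = extract_synergies(grasps, n_synergies=2)
err = np.mean((grasps - reconstruct(grasps, mean, syn)) ** 2)
print("mean reconstruction error with 2 synergies: %.4f" % err)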
Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-01
This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms. PMID:26784203
Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-15
This paper describes a real-time motion planner based on the drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers' visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers' visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms.
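A bare-bones version of an RRT planner with a biased ("guided") sampling step, in the spirit of the approach described above, is sketched below in Python. The obstacles, bias probability and step size are assumptions, and the B-spline smoothing stage of the published planner is omitted.

# Illustrative 2-D RRT sketch with a biased ("guided") sampling step.
# Obstacles, bias probability and step size are assumptions; smoothing omitted.
import math, random

OBSTACLES = [((5.0, 5.0), 1.5), ((7.0, 2.0), 1.0)]   # (centre, radius)

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def sample(goal, bias=0.3, xmax=10.0, ymax=10.0):
    """Guided sampling: with some probability, draw near the goal/intended path."""
    if random.random() < bias:
        return (goal[0] + random.uniform(-1, 1), goal[1] + random.uniform(-1, 1))
    return (random.uniform(0, xmax), random.uniform(0, ymax))

def rrt(start, goal, step=0.5, iters=3000):
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        q = sample(goal)
        near = min(nodes, key=lambda n: math.dist(n, q))
        d = math.dist(near, q)
        if d == 0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:      # close enough: back-track the path
            path, n = [goal], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

random.seed(3)
path = rrt((0.5, 0.5), (9.0, 9.0))
print("path found with %d waypoints" % len(path) if path else "no path")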
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Research with Pregnant Women: New Insights on Legal Decision-Making
Mastroianni, Anna C.; Henry, Leslie Meltzer; Robinson, David; Bailey, Theodore; Faden, Ruth R.; Little, Margaret O.; Lyerly, Anne Drapkin
2017-01-01
Although pregnant women rely on medical interventions to treat and prevent a wide variety of health conditions, they are frequently excluded or underrepresented in clinical research. The resulting dearth of pregnancy-specific evidence to guide clinical decisionmaking routinely exposes pregnant women, and their future offspring, to risk of uncertain harms for uncertain benefits. The two legal factors regularly cited as obstacles to such research are the federal regulatory scheme and fear of liability. This article reveals a far more nuanced and complex view of the legal context. First, legal professionals may—at any time from product conception to marketing—influence decisions about research with pregnant women. Second, factors not previously articulated in the literature may prompt legal professionals to slow or halt such research. They include: financial interests, regulatory ambiguity, obstacles to risk management, and site-specific laws unrelated to research. Any efforts to promote the ethical inclusion of pregnant women in research must acknowledge the role of legal decisionmakers and address their professional concerns. PMID:28543423
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michimoto, Kenkichi, E-mail: michikoo@jikei.ac.jp; Shimizu, Kanichiro; Kameoka, Yoshihiko
Purpose: To retrospectively evaluate the feasibility of transcatheter arterial embolization (TAE) using a mixture of absolute ethanol and iodized oil to improve localization of endophytic renal masses on unenhanced computed tomography (CT) prior to CT-guided percutaneous cryoablation (PCA). Materials and Methods: Our institutional review board approved this retrospective study. From September 2011 to June 2015, 17 patients (mean age, 66.8 years) with stage T1a endophytic renal masses (mean diameter, 26.5 mm) underwent TAE using a mixture of absolute ethanol and iodized oil to improve visualization of small and endophytic renal masses on unenhanced CT prior to CT-guided PCA. TAE was considered successful when the accumulated iodized oil depicted the whole tumor edge on CT. PCA was considered successful when the iceball covered the entire tumor with over a 5 mm margin. Oncological and renal functional outcomes and complications were also evaluated. Results: TAE was successfully performed in 16 of 17 endophytic tumors. These 16 tumors then underwent CT-guided PCA with distinct visualization of tumor localization and a safe ablation margin. During the mean follow-up period of 15.4 ± 5.1 months, one patient developed local recurrence. The estimated glomerular filtration rate declined by 8 %, a statistically significant change (P = 0.01). There was no procedure-related significant complication. Conclusion: TAE using a mixture of absolute ethanol and iodized oil to improve visualization of endophytic renal masses facilitated tumor localization on unenhanced CT, permitting depiction of the tumor edge as well as a safe margin for ablation during CT-guided PCA, with an acceptable decline in renal function.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which is highly independent, highly precise, and not susceptible to electrical interference, has attracted increasing attention in UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes, and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and the lunar exploration program entering its third phase, there has been significant progress in the study of visual navigation. This paper reviews the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied in three areas. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position, and velocity can be obtained from the relationship between sensor images and the carrier's attitude, between instantaneous matching images and reference images, and between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching, and analysis of image frames, are mainly introduced. (3) Target tracking and positioning: using the obtained images, position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering, and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems: a parallel structure allows image detection and processing to be carried out at high speed; such systems are applied to rapid-response tasks. (2) Distributed-network visual systems: several discrete image-acquisition sensors at different locations transmit image data to a node processor, increasing the sampling rate. (3) Visual systems combined with observers: image sensors are combined with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency, and strong noise. Finally, the difficulties of computer-vision-based navigation in practical applications are briefly discussed: (1) the heavy computational load of image processing limits real-time performance; (2) strong environmental influences limit the system's resistance to interference; and (3) because such systems are designed for particular environments, their adaptability is poor.
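As a small worked example of one of the tracking tools listed in this survey, the Python sketch below runs a constant-velocity Kalman filter over noisy image-plane detections of a target and compares raw and filtered errors. The motion model, noise levels and synthetic track are illustrative assumptions, not taken from the survey.

# Hedged example of Kalman-filter target tracking on noisy image-plane
# detections; motion model and noise levels are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)   # process noise
R = 4.0 * np.eye(2)    # measurement (pixel) noise

def kalman_track(detections):
    x = np.array([detections[0][0], detections[0][1], 0.0, 0.0])
    P = 100.0 * np.eye(4)
    estimates = []
    for z in detections:
        x, P = F @ x, F @ P @ F.T + Q              # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)        # update with the detection
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)

rng = np.random.default_rng(4)
truth = np.column_stack([np.linspace(0, 50, 100), np.linspace(0, 20, 100)])
measured = truth + rng.normal(scale=2.0, size=truth.shape)
est = kalman_track(measured)
print("raw RMSE %.2f px, filtered RMSE %.2f px"
      % (np.sqrt(((measured - truth) ** 2).mean()),
         np.sqrt(((est - truth) ** 2).mean())))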
3D Scientific Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-03-01
This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
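A taste of the scripted approach the book teaches: the snippet below uses Blender's Python API to place spheres whose radii encode data values. It must be run inside Blender's own Python environment (the bpy module is only available there), and the data points are made up for illustration.

# Tiny illustrative scene script for Blender's Python console: spheres whose
# radii encode data values at given positions. Data points are made up.
import bpy

data_points = [(0, 0, 0, 0.2), (1, 2, 0.5, 0.4), (-1, 1, 1.5, 0.3)]  # x, y, z, value

for x, y, z, value in data_points:
    # sphere radius encodes the data value at each (x, y, z) position
    bpy.ops.mesh.primitive_uv_sphere_add(radius=value, location=(x, y, z))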
Vision-guided ocular growth in a mutant chicken model with diminished visual acuity
Ritchey, Eric R.; Zelinka, Christopher; Tang, Junhua; Liu, Jun; Code, Kimberly A.; Petersen-Jones, Simon; Fischer, Andy J.
2012-01-01
Visual experience is known to guide ocular growth. We tested the hypothesis that vision-guided ocular growth is disrupted in a model system with diminished visual acuity. We examined whether ocular elongation is influenced by form-deprivation (FD) and lens-imposed defocus in the Retinopathy, Globe Enlarged (RGE) chicken. Young RGE chicks have poor visual acuity, without significant retinal pathology, resulting from a mutation in guanine nucleotide-binding protein β3 (GNB3), also known as transducin β3 or Gβ3. The mutation in GNB3 destabilizes the protein and causes a loss of Gβ3 from photoreceptors and ON-bipolar cells (Ritchey et al. 2010). FD increased ocular elongation in RGE eyes in a manner similar to that seen in wild-type (WT) eyes. By comparison, the excessive ocular elongation that results from hyperopic defocus was increased, whereas myopic defocus failed to significantly decrease ocular elongation in RGE eyes. Brief daily periods of unrestricted vision interrupting FD prevented ocular elongation in RGE chicks in a manner similar to that seen in WT chicks. Glucagonergic amacrine cells differentially expressed the immediate early gene Egr1 in response to growth-guiding stimuli in RGE retinas, but the defocus-dependent up-regulation of Egr1 was smaller in RGE retinas than in WT retinas. We conclude that high visual acuity, and the retinal signaling mediated by Gβ3, is not required for emmetropization and the excessive ocular elongation caused by FD and hyperopic defocus. However, the loss of acuity and Gβ3 from RGE retinas causes enhanced responses to hyperopic defocus and diminished responses to myopic defocus. PMID:22824538
Virtual Worlds, Virtual Literacy: An Educational Exploration
ERIC Educational Resources Information Center
Stoerger, Sharon
2008-01-01
Virtual worlds enable students to learn through seeing, knowing, and doing within visually rich and mentally engaging spaces. Rather than reading about events, students become part of the events through the adoption of a pre-set persona. Along with visual feedback that guides the players' activities and the development of visual skills, visual…
Using Visual Literacy to Teach Science Academic Language: Experiences from Three Preservice Teachers
ERIC Educational Resources Information Center
Kelly-Jackson, Charlease; Delacruz, Stacy
2014-01-01
This original pedagogical study captured three preservice teachers' experiences using visual literacy strategies as an approach to teaching English language learners (ELLs) science academic language. The following research questions guided this study: (1) What are the experiences of preservice teachers' use of visual literacy to teach science…
Task Demands Control Acquisition and Storage of Visual Information
ERIC Educational Resources Information Center
Droll, Jason A.; Hayhoe, Mary M.; Triesch, Jochen; Sullivan, Brian T.
2005-01-01
Attention and working memory limitations set strict limits on visual representations, yet researchers have little appreciation of how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. A change was made to 1 of the features of the brick being…
Self-Monitoring of Gaze in High Functioning Autism
ERIC Educational Resources Information Center
Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques
2012-01-01
Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Evidence from Visuomotor Adaptation for Two Partially Independent Visuomotor Systems
ERIC Educational Resources Information Center
Thaler, Lore; Todd, James T.
2010-01-01
Visual information can specify spatial layout with respect to the observer (egocentric) or with respect to an external frame of reference (allocentric). People can use both of these types of visual spatial information to guide their hands. The question arises if movements based on egocentric and movements based on allocentric visual information…
The Preference of Visualization in Teaching and Learning Absolute Value
ERIC Educational Resources Information Center
Konyalioglu, Alper Cihan; Aksu, Zeki; Senel, Esma Ozge
2012-01-01
Visualization is mostly despised although it complements and--sometimes--guides the analytical process. This study mainly investigates teachers' preferences concerning the use of the visualization method and determines the extent to which they encourage their students to make use of it within the problem-solving process. This study was conducted…
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
Self-Study and Evaluation Guide/1968 Edition. Section D-3: Rehabilitation Centers.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on rehabilitation centers is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the self-evaluation…
Self-Study and Evaluation Guide/1979 Edition. Section B-1: Agency Profile.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This guide on developing an agency profile is one of 28 guides designed for organizations serving the blind and the visually handicapped who are undertaking a self-study as part of the process for accreditation by the National Accreditation Council (NAC). Instructions for preparing a packet of informative data and material for advance study by…
Self-Study and Evaluation Guide/1977 Edition. Section D-8: Rehabilitation Teaching Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on rehabilitation teaching services is one of 28 guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by…
Self-Study and Evaluation Guide [1976 Edition]. Section D-4: Workshop Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on workshop service is one of twenty-eight guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
Self-Study and Evaluation Guide/[1975 Edition]. Section D-6: Vocational Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on vocational services is one of 28 guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
ERIC Educational Resources Information Center
Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.
2010-01-01
Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
3D Scientific Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-03-01
This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
El Darawany, Hamed; Barakat, Alaa; Madi, Maha Al; Aldamanhori, Reem; Al Otaibi, Khalid; Al-Zahrani, Ali A
2016-01-01
Inserting a guide wire is a common practice during endo-urological procedures. A rare complication in patients with ureteral stones is the creation of an iatrogenic submucosal tunnel (IST) during endoscopic guide wire placement. Summarize data on IST. Retrospective descriptive study of patients treated from October 2009 until January 2015. King Fahd Hospital of the University, Al-Khobar, Saudi Arabia. Patients with ureteral stones were divided into 2 groups. In group I (335 patients), the ureteral stones were removed by ureteroscopy in one stage. Group II (97 patients) had a two-stage procedure starting with double J-stent placement for kidney drainage followed within 3 weeks by ureteroscopic stone removal. Endoscopic visualization of ureteric submucosal tunneling by guide wire. IST occurred in 9/432 patients with ureteral stones (2.1%). The diagnosis in group I was made during ureteroscopy by direct visualization of a vanishing guide wire at the level of the stone (6 patients). In group II, IST was suspected when renal pain was not relieved after placement of the double J-stent or if imaging by ultrasound or intravenous urography showed persistent back pressure to the obstructed kidney (3 patients). The condition was subsequently confirmed by ureteroscopy. Forceful advancement of the guide wire in an inflamed and edematous ureteral segment impacted by a stone is probably the triggering factor for development of IST. Definitive diagnosis is possible only by direct visualization during ureteroscopy. Awareness of this potential complication is important to guard against its occurrence. Relatively small numbers of subjects and the retrospective nature of the study.
Overcoming the obstacles: Life stories of scientists with learning disabilities
NASA Astrophysics Data System (ADS)
Force, Crista Marie
Scientific discovery is at the heart of solving many of the problems facing contemporary society. Scientists are retiring at rates that exceed the numbers of new scientists. Unfortunately, scientific careers still appear to be outside the reach of most individuals with learning disabilities. The purpose of this research was to better understand the methods by which successful learning disabled scientists have overcome the barriers and challenges associated with their learning disabilities in their preparation and performance as scientists. This narrative inquiry involved the researcher writing the life stories of four scientists. These life stories were generated from extensive interviews in which each of the scientists recounted their life histories. The researcher used narrative analysis to "make sense" of these learning disabled scientists' life stories. The narrative analysis required the researcher to identify and describe emergent themes characterizing each scientist's life. A cross-case analysis was then performed to uncover commonalities and differences in the lives of these four individuals. Results of the cross-case analysis revealed that all four scientists had a passion for science that emerged at an early age, which, with strong drive and determination, drove these individuals to succeed in spite of the many obstacles arising from their learning disabilities. The analysis also revealed that these scientists chose careers based on their strengths; they actively sought mentors to guide them in their preparation as scientists; and they developed coping techniques to overcome difficulties and succeed. The cross-case analysis also revealed differences in the degree to which each scientist accepted his or her learning disability. While some demonstrated inferior feelings about their successes as scientists, still other individuals revealed feelings of having superior abilities in areas such as visualization and working with people. These individuals revealed beliefs that they developed these special abilities as a result of their learning differences, which made them better than their non-learning disabled peers in certain areas. Finally, the researcher discusses implications of these findings in the light of special accommodations that can be made by teachers, school counselors, and parents to encourage learning disabled children who demonstrate interest in becoming scientists.
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
Screening Algorithm to Guide Decisions on Whether to Conduct a Health Impact Assessment
Provides a visual aid in the form of a decision algorithm that helps guide discussions about whether to proceed with an HIA. The algorithm can help structure, standardize, and document the decision process.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
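The optical-flow-with-feature-tracking approach described above can be illustrated with a minimal monocular sketch. It assumes a downward-pointing camera over flat ground with hypothetical height and focal-length values; it is not the implementation evaluated in the report.

```python
import cv2
import numpy as np

def flow_displacement(prev_gray, gray, height_m=0.5, focal_px=600.0):
    """Estimate ground-plane displacement (metres) between two frames
    from sparse optical flow of tracked features.

    height_m and focal_px are illustrative calibration values; a real
    system would take them from the rover geometry and camera model.
    """
    # Detect corner features in the previous frame
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    if prev_pts is None:
        return np.zeros(2)
    # Track the features into the current frame (pyramidal Lucas-Kanade)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    # Median pixel shift is robust to a few bad tracks
    shift_px = np.median((next_pts[good] - prev_pts[good]).reshape(-1, 2), axis=0)
    # For a downward camera over flat ground, metres per pixel is roughly height / focal length
    return -shift_px * (height_m / focal_px)  # camera motion is opposite to image flow
```

Summing the per-frame displacements yields a 2-D track that could then serve as a measurement input to the MEMS-IMU sensor-fusion filter mentioned in the abstract.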
Force sensor attachable to thin fiberscopes/endoscopes utilizing high elasticity fabric.
Watanabe, Tetsuyou; Iwai, Takanobu; Fujihira, Yoshinori; Wakako, Lina; Kagawa, Hiroyuki; Yoneyama, Takeshi
2014-03-12
An endoscope/fiberscope is a minimally invasive tool used for directly observing tissues in areas deep inside the human body where access is limited. However, this tool only yields visual information. If force feedback information were also available, endoscope/fiberscope operators would be able to detect indurated areas that are visually hard to recognize. Furthermore, obtaining such feedback information from tissues in areas where collecting visual information is a challenge would be highly useful. The major obstacle is that such force information is difficult to acquire. This paper presents a novel force sensing system that can be attached to a very thin fiberscope/endoscope. To ensure a small size, high resolution, easy sterilization, and low cost, the proposed force visualization-based system uses a highly elastic material, panty stocking fabric. The paper also presents the methodology for deriving the force value from the captured image. The system has a resolution of less than 0.01 N and sensitivity of greater than 600 pixels/N within the force range of 0-0.2 N.
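The abstract states that force is derived from the captured image of the elastic fabric but does not spell out the mapping. The following is only an illustrative sketch in which a pixel-displacement measurement is converted to force through a linear calibration fitted from known loads (all numbers hypothetical); it is not the authors' published method.

```python
import numpy as np

def fit_calibration(displacements_px, forces_N):
    """Least-squares fit of a linear pixel-displacement -> force mapping.
    Inputs are calibration measurements taken with known loads (hypothetical data)."""
    k, b = np.polyfit(displacements_px, forces_N, 1)
    return k, b

def force_from_image(displacement_px, k, b):
    """Convert an observed fabric displacement (pixels) to force (newtons)."""
    return k * displacement_px + b

# Example: calibrate with known weights, then estimate an unknown force
k, b = fit_calibration([0, 20, 40, 60], [0.0, 0.05, 0.11, 0.16])
print(round(force_from_image(30, k, b), 3))  # roughly 0.08 N, within the 0-0.2 N range
```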
Vision drives accurate approach behavior during prey capture in laboratory mice
Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.
2016-01-01
Summary The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Journal of Rehabilitation Research and Development Progress Reports 1994, Volume 32, June 1995
1995-06-01
Contents excerpt: Stepping Over an Obstacle: Effect of Reduced Visual Field; Effect of Reduced Optic Flow on Gait; Effects of Robotic-Assisted Weight Support on Gait; … Geometry in Hip Replacement; Wear Debris Generation in Hip Modular Head and Neck Components; Changes in Bone Blood Flow Associated with … rectangular cross-section to form a continuously flowing ribbon of melted plastic. Ribbon dimensions are 0.75 mm thick and 5 mm wide, corresponding to …
Memory-guided force control in healthy younger and older adults.
Neely, Kristina A; Samimy, Shaadee; Blouch, Samantha L; Wang, Peiyuan; Chennavasin, Amanda; Diaz, Michele T; Dennis, Nancy A
2017-08-01
Successful performance of a memory-guided motor task requires participants to store and then recall an accurate representation of the motor goal. Further, participants must monitor motor output to make adjustments in the absence of visual feedback. The goal of this study was to examine memory-guided grip force in healthy younger and older adults and compare it to performance on behavioral tasks of working memory. Previous work demonstrates that healthy adults decrease force output as a function of time when visual feedback is not available. We hypothesized that older adults would decrease force output at a faster rate than younger adults, due to age-related deficits in working memory. Two groups of participants, younger adults (YA: N = 32, mean age 21.5 years) and older adults (OA: N = 33, mean age 69.3 years), completed four 20-s trials of isometric force with their index finger and thumb, equal to 25% of their maximum voluntary contraction. In the full-vision condition, visual feedback was available for the duration of the trial. In the no vision condition, visual feedback was removed for the last 12 s of each trial. Participants were asked to maintain constant force output in the absence of visual feedback. Participants also completed tasks of word recall and recognition and visuospatial working memory. Counter to our predictions, when visual feedback was removed, younger adults decreased force at a faster rate compared to older adults and the rate of decay was not associated with behavioral performance on tests of working memory.
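One simple way to quantify the rate at which force output decays after feedback removal, comparable across groups, is to fit a slope over the no-vision window. The sketch below assumes a sampled force trace and the 8 s vision / 12 s no-vision split described above; it is not necessarily the study's exact analysis.

```python
import numpy as np

def force_decay_rate(force, sample_rate_hz, vision_removed_s=8.0):
    """Slope (N/s) of force output after visual feedback is removed.

    force: 1-D array covering a 20-s trial; feedback is assumed removed at
    vision_removed_s, leaving the 12-s no-vision window for the fit.
    """
    t = np.arange(len(force)) / sample_rate_hz
    mask = t >= vision_removed_s
    slope, _ = np.polyfit(t[mask], force[mask], 1)
    return slope  # more negative values indicate faster force decay
```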
Move with Me: A Parents' Guide to Movement Development for Visually Impaired Babies.
ERIC Educational Resources Information Center
Blind Childrens Center, Los Angeles, CA.
This booklet presents suggestions for parents to promote their visually impaired infant's motor development. It is pointed out that babies with serious visual loss often prefer their world to be constant and familiar and may resist change (including change in position); therefore, it is important that a wide range of movement activities be…
Studies of Visual Attention in Physics Problem Solving
ERIC Educational Resources Information Center
Madsen, Adrian M.
2013-01-01
The work described here represents an effort to understand and influence visual attention while solving physics problems containing a diagram. Our visual system is guided by two types of processes--top-down and bottom-up. The top-down processes are internal and determined by one's prior knowledge and goals. The bottom-up processes are external and…
ERIC Educational Resources Information Center
Kapperman, Gaylen; Kelly, Stacy M.
2013-01-01
Individuals with visual impairments (that is, those who are blind or have low vision) do not have the same opportunities to develop their knowledge of sexual health and participate in sex education as their sighted peers (Krupa & Esmail, 2010), although young adults with visual impairments participate in sexual activities at similar rates as their…
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
An Exploratory Study of Interactivity in Visualization Tools: "Flow" of Interaction
ERIC Educational Resources Information Center
Liang, Hai-Ning; Parsons, Paul C.; Wu, Hsien-Chi; Sedig, Kamran
2010-01-01
This paper deals with the design of interactivity in visualization tools. There are several factors that can be used to guide the analysis and design of the interactivity of these tools. One such factor is flow, which is concerned with the duration of interaction with visual representations of information--interaction being the actions performed…
Intelligence Level Performance Standards Research for Autonomous Vehicles
Bostelman, Roger B.; Hong, Tsai H.; Messina, Elena
2017-01-01
United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGV's). However, performance standards for AGV's and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV developments towards more adaptable and intelligent systems. These other efforts also provide useful standards development criteria for AGV performance test methods. Current standards areas being considered for AGVs are for docking, navigation, obstacle avoidance, and the ground truth systems that measure performance. This paper provides a look to the future with standards developments in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance. PMID:28649189
Intelligence Level Performance Standards Research for Autonomous Vehicles.
Bostelman, Roger B; Hong, Tsai H; Messina, Elena
2015-01-01
United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGV's). However, performance standards for AGV's and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV developments towards more adaptable and intelligent systems. These other efforts also provide useful standards development criteria for AGV performance test methods. Current standards areas being considered for AGVs are for docking, navigation, obstacle avoidance, and the ground truth systems that measure performance. This paper provides a look to the future with standards developments in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance.
Self-Study and Evaluation Guide/1968 Edition. Section D-5: Social Services. (Revised 1977).
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on social services is one of twenty-eight guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
Self-Study and Evaluation Guide/1977 Edition. Section D-2A: Orientation and Mobility Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on orientation and mobility services is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
An evaluation of the experiences of guide dog owners visiting Scottish veterinary practices.
Fraser, M; Girling, S J
2016-09-10
Guide dogs and their owners will visit a veterinary practice at least twice a year. The aim of this study was to evaluate what guide dog owners thought about these visits, in order to identify areas of good practice which could be incorporated into the undergraduate curriculum. Nine guide dog owners volunteered to take part in the study and were interviewed by the primary researcher. Thematic analysis was carried out and several themes were identified: good experiences were highlighted where staff had an understanding of visual impairment and the work of a guide dog; the importance of good communication skills involving the owner in the consultation; the need for veterinary professionals to understand the bond between an owner and guide dog; how medication and information could be provided in a user-friendly format for someone affected by a visual impairment and concerns about costs and decision making for veterinary treatment. This work highlights the importance for veterinary staff to talk to, empathise with and understand the individual circumstances of their clients and identifies areas that should be included in veterinary education to better prepare students for the workplace. British Veterinary Association.
Reference Collections and Standards.
ERIC Educational Resources Information Center
Winkel, Lois
1999-01-01
Reviews six reference materials for young people: "The New York Public Library Kid's Guide to Research"; "National Audubon Society First Field Guide. Mammals"; "Star Wars: The Visual Dictionary"; "Encarta Africana"; "World Fact Book, 1998"; and "Factastic Book of 1001 Lists". Includes ordering information.(AEF)
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
Linander, Nellie; Dacke, Marie; Baird, Emily
2015-04-01
When flying through narrow spaces, insects control their position by balancing the magnitude of apparent image motion (optic flow) experienced in each eye and their speed by holding this value about a desired set point. Previously, it has been shown that when bumblebees encounter sudden changes in the proximity to nearby surfaces - as indicated by a change in the magnitude of optic flow on each side of the visual field - they adjust their flight speed well before the change, suggesting that they measure optic flow for speed control at low visual angles in the frontal visual field. Here, we investigated the effect that sudden changes in the magnitude of translational optic flow have on both position and speed control in bumblebees if these changes are asymmetrical; that is, if they occur only on one side of the visual field. Our results reveal that the visual region over which bumblebees respond to optic flow cues for flight control is not dictated by a set viewing angle. Instead, bumblebees appear to use the maximum magnitude of translational optic flow experienced in the frontal visual field. This strategy ensures that bumblebees use the translational optic flow generated by the nearest obstacles - that is, those with which they have the highest risk of colliding - to control flight. © 2015. Published by The Company of Biologists Ltd.
Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. Method This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Conclusions Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders. PMID:28595354
Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John
2017-06-10
Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.
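The hypothesis test enabled by MVA rests on the chance probability that a masked analyst correctly identifies the randomly assigned stagger of baselines. A small illustrative calculation for a multiple-baseline design follows; it is a sketch of that logic, not the authors' exact procedure.

```python
from math import factorial

def mva_p_value(n_tiers):
    """Chance probability that a masked analyst picks the one true random
    assignment of baseline lengths (stagger points) to participants, when
    n_tiers participants are randomly assigned to n_tiers distinct start points."""
    return 1.0 / factorial(n_tiers)

print(mva_p_value(4))  # 1/24, about 0.042, for a 4-tier multiple-baseline study
```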
Three-dimensional landing zone ladar
NASA Astrophysics Data System (ADS)
Savage, James; Goodrich, Shawn; Burns, H. N.
2016-05-01
Three-Dimensional Landing Zone (3D-LZ) refers to a series of Air Force Research Laboratory (AFRL) programs to develop high-resolution, imaging ladar to address helicopter approach and landing in degraded visual environments with emphasis on brownout; cable warning and obstacle avoidance; and controlled flight into terrain. Initial efforts adapted ladar systems built for munition seekers, and success led to the 3D-LZ Joint Capability Technology Demonstration (JCTD), a 27-month program to develop and demonstrate a ladar subsystem that could be housed with the AN/AAQ-29 FLIR turret flown on US Air Force Combat Search and Rescue (CSAR) HH-60G Pave Hawk helicopters. Following the JCTD flight demonstration, further development focused on reducing size, weight, and power while continuing to refine the real-time geo-referencing, dust rejection, obstacle and cable avoidance, and Helicopter Terrain Awareness and Warning (HTAWS) capability demonstrated under the JCTD. This paper summarizes significant ladar technology development milestones to date, individual LADAR technologies within 3D-LZ, and results of the flight testing.
1982-06-01
for use by intelligence analysts in field operations, tactical training, and academic … Category/factor list: 1.0 Terrain factors: 1.1 Fields of fire; 1.2 Cover and concealment; 1.3 Mobility; 1.4 Seize/deny key terrain; 1.5 Observation … provisions of terrain; 1.6 Exploits or accommodates natural and artificial obstacles. 2.0 U.S. force factors: as related to mission accomplishment.
Konrad, Shelley Cohen; Browning, David M
2012-01-01
Theories and traditions emphasizing the centrality of caring have guided the evolution of the healthcare professions. In contemporary practice, creating a therapeutic context in which healing can occur relies not just on the caring dispositions of individual clinicians, but also on the collective relational capacities of interprofessional healthcare teams. This article describes the intersection and complementarity of relational and interprofessional learning approaches to health education, provides exemplars of shared learning models and discusses the benefits and obstacles to integrating relational and interprofessional philosophies into real world practice.
STEMujeres: A case study of the life stories of first-generation Latina engineers and scientists
NASA Astrophysics Data System (ADS)
Vielma, Karina I.
Research points to the many obstacles that first-generation, Latina students face when attempting to enter fields in science, technology, engineering, and mathematics (STEM). This qualitative case study examined the personal and educational experiences of first-generation Latina women who successfully navigated the STEM educational pipeline earning bachelor's, master's, and doctoral degrees in various fields of engineering. Three research questions guided the study: (1) How does a first-generation Latina engineer and scientist describe her life experiences as she became interested in STEM? (2) How does she describe her educational experiences as she navigated the educational pipeline in the physics, mathematics, and/or engineering field(s)? (3) How did she respond to challenges, obstacles and microaggressions, if any, while navigating the STEM educational pipeline? The study was designed using a combination of Critical Race Theory frameworks---Chicana feminist theory and racial microaggressions. Through a life history case study approach, the women shared their stories of success. With the participants' help, influential persons in their educational paths were identified and interviewed. Data were analyzed using crystallization, and thematic results indicated that all women in this study identified their parents as planting the seed of interest through the introduction of mathematics. The women unknowingly prepared to enter the STEM fields by taking math and science coursework. They were guided to apply to STEM universities and academic programs by others who knew about their interest in math and science including teachers, counselors, and level-up peers---students close in age who were just a step more advanced in the educational pipeline. The women also drew from previous familial struggles to guide their perseverance and motivation toward educational degree completion. The lives of the women were complex and intersected with various forms of racism including gender, race, class, legality and power. In many instances, the women used their knowledge to help other STEMujeres advance.
Reading Digital with Low Vision
Legge, Gordon E.
2017-01-01
Reading difficulty is a major consequence of vision loss for more than four million Americans with low vision. Difficulty in accessing print imposes obstacles to education, employment, social interaction and recreation. In recent years, research in vision science has made major strides in understanding the impact of low vision on reading, and the dependence of reading performance on text properties. The ongoing transition to the production and distribution of digital documents brings about new opportunities for people with visual impairment. Digital documents on computers and mobile devices permit customization of print size, spacing, font style, contrast polarity and page layout to optimize reading displays for people with low vision. As a result, we now have unprecedented opportunities to adapt text format to meet the needs of visually impaired readers. PMID:29242668
Visual Scan Adaptation During Repeated Visual Search
2010-01-01
Operational Symbols: Can a Picture Be Worth a Thousand Words?
1991-04-01
internal visualization, because forms are to visual communication what words are to verbal communication. From a psychological point of view, the process … captions guide what is learned from a picture or graphic.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
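For context, the lower- and higher-order RMS values reported above are standard wavefront summaries: with normalized Zernike coefficients, RMS over any set of modes is the square root of the sum of the squared coefficients. A brief sketch with hypothetical coefficients follows; the image-quality metrics logVSX, logNS and logLIB are more involved and not reproduced here.

```python
import numpy as np

def rms_wavefront_error(coeffs_um, orders, min_order=3):
    """Higher-order RMS (microns) from normalized Zernike coefficients.

    coeffs_um: coefficient magnitudes; orders: radial order of each mode.
    Modes with radial order >= min_order (default 3) count as 'higher-order'.
    """
    c = np.asarray(coeffs_um)
    n = np.asarray(orders)
    return float(np.sqrt(np.sum(c[n >= min_order] ** 2)))

# Hypothetical example: defocus/astigmatism are order 2, coma/trefoil order 3,
# spherical aberration order 4; only orders >= 3 enter the higher-order RMS.
print(rms_wavefront_error([1.2, 0.8, 0.25, 0.15, 0.10], [2, 2, 3, 3, 4]))  # ~0.31 um
```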
Multicultural Arts: An Infusion.
ERIC Educational Resources Information Center
Wilderberger, Elizabeth
1991-01-01
Presents two examples from 1990 curriculum guide written for Pullen School. Designed for middle school students, "The Japanese Gardener as Visual Artist" emphasizes nature in aesthetic depictions including architecture, horticulture, and visual arts. Appropriate for primary grades, "Reading/Language Arts: Using Books from the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stattaus, Joerg, E-mail: joerg.stattaus@uni-due.de; Kuehl, Hilmar; Ladd, Susanne
2007-09-15
Purpose. Our study aimed to determine the visibility of small liver lesions during CT-guided biopsy and to assess the influence of lesion visibility on biopsy results. Material and Methods. Fifty patients underwent CT-guided core biopsy of small focal liver lesions (maximum diameter, 3 cm); 38 biopsies were performed using noncontrast CT, and the remaining 12 were contrast-enhanced. Visibility of all lesions was graded on a 4-point scale (0 = not visible, 1 = poorly visible, 2 = sufficiently visible, 3 = excellently visible) before and during biopsy (with the needle placed adjacent to and within the target lesion). Results. Forty-three biopsies (86%) yielded diagnostic results, and seven biopsies were false-negative. In noncontrast biopsies, the rate of insufficiently visualized lesions (grades 0-1) increased significantly during the procedure, from 10.5% to 44.7%, due to needle artifacts. This resulted in more (17.6%) false-negative biopsy results compared to lesions with good visualization (4.8%), although this difference lacks statistical significance. Visualization impairment appeared more often with an intercostal or subcostal vs. an epigastric access and with a subcapsular vs. a central lesion location, respectively. With contrast-enhanced biopsy the visibility of hepatic lesions was only temporarily improved, with a risk of complete obscuration in the late phase. Conclusion. Visibility of small liver lesions diminished significantly during CT-guided biopsy due to needle artifacts, with a fourfold increased rate of insufficiently visualized lesions and of false-negative histological results. Contrast enhancement did not reveal better results.
Guiding the mind's eye: improving communication and vision by external control of the scanpath
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas
2006-02-01
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.
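A minimal sketch of the gaze-contingent manipulation described above: one candidate location is boosted with a briefly flashed red dot while the others are suppressed by damping local temporal frequency, here crudely approximated by blending toward a running temporal average. The function name, radius and blending weight are illustrative assumptions, not the authors' system.

```python
import numpy as np

def guide_frame(frame, candidates, boost_idx, temporal_avg,
                flash_on, radius=12, suppress_alpha=0.7):
    """Return a modified video frame that nudges gaze toward one candidate.

    frame: HxWx3 uint8 video frame; candidates: list of (x, y) salient points.
    boost_idx: index of the candidate whose attendance probability is raised.
    temporal_avg: running average of past frames (same shape, float).
    flash_on: True only for a few milliseconds so the red dot is barely noticed.
    """
    out = frame.astype(float)
    h, w = out.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for i, (x, y) in enumerate(candidates):
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        if i == boost_idx:
            if flash_on:                      # brief red dot raises local saliency
                out[mask] = [255, 0, 0]
        else:                                 # damp temporal frequency locally
            out[mask] = (1 - suppress_alpha) * out[mask] + suppress_alpha * temporal_avg[mask]
    return out.astype(np.uint8)
```

In a real system this transformation would run gaze-contingently at low latency, with the candidate set refreshed from a saliency prediction on each frame.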
Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation.
Fiore, Vincenzo G.; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation. PMID:28824390
Structural and functional changes across the visual cortex of a patient with visual form agnosia.
Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J
2013-07-31
Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.
Visually based path-planning by Japanese monkeys.
Mushiake, H; Saito, N; Sakamoto, K; Sato, Y; Tanji, J
2001-03-01
To construct an animal model of strategy formation, we designed a maze path-finding task. First, we asked monkeys to capture a goal in the maze by moving a cursor on the screen. Cursor movement was linked to movements of each wrist. When the animals learned the association between cursor movement and wrist movement, we established a start and a goal in the maze, and asked them to find a path between them. We found that the animals took the shortest pathway, rather than approaching the goal randomly. We further found that the animals adopted a strategy of selecting a fixed intermediate point in the visually presented maze to select one of the shortest pathways, suggesting a visually based path planning. To examine their capacity to use that strategy flexibly, we transformed the task by blocking pathways in the maze, providing a problem to solve. The animals then developed a strategy of solving the problem by planning a novel shortest path from the start to the goal and rerouting the path to bypass the obstacle.
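As a brief illustration of the path-planning problem in the task above, the sketch below (a hypothetical grid maze, not the authors' maze layout) finds a shortest path with breadth-first search and reroutes when a pathway is blocked, mirroring the detour behavior the monkeys produced.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a grid maze; '#' cells are blocked pathways."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

maze = ["....",
        ".##.",
        "....",
        "...."]
print(shortest_path(maze, (0, 0), (3, 3)))      # one of the shortest routes
blocked = ["....",
           ".##.",
           "..#.",
           "...."]
print(shortest_path(blocked, (0, 0), (3, 3)))   # rerouted path bypassing the new obstacle
```

Any shortest-path routine would serve here; breadth-first search is used only because the toy maze is an unweighted grid.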
ERIC Educational Resources Information Center
Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro
2005-01-01
Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…
How a Visual Language of Abstract Shapes Facilitates Cultural and International Border Crossings
ERIC Educational Resources Information Center
Conroy, Arthur Thomas, III
2016-01-01
This article describes a visual language comprised of abstract shapes that has been shown to be effective in communicating prior knowledge between and within members of a small team or group. The visual language includes a set of geometric shapes and rules that guide the construction of the abstract diagrams that are the external representation of…
Effects of shade tab arrangement on the repeatability and accuracy of shade selection.
Yılmaz, Burak; Yuzugullu, Bulem; Cınar, Duygu; Berksun, Semih
2011-06-01
Appropriate and repeatable shade matching using visual shade selection remains a challenge for the restorative dentist. The purpose of this study was to evaluate the effect of different arrangements of a shade guide on the repeatability and accuracy of visual shade selection by restorative dentists. Three Vitapan Classical shade guides were used for shade selection. Seven shade tabs from one shade guide were used as target shades for the testing (A1, A4, B2, B3, C2, C4, and D3); the other 2 guides were used for shade selection by the subjects. One shade guide was arranged according to hue and chroma and the second was arranged according to value. Thirteen male and 22 female restorative dentists were asked to match the target shades using shade guide tabs arranged in the 2 different orders. The sessions were performed twice with each guide in a viewing booth. Collected data were analyzed with Fisher's exact test to compare the accuracy and repeatability of the shade selection (α=.05). There were no significant differences observed in the accuracy or repeatability of the shade selection results obtained with the 2 different arrangements. When the hue/chroma-ordered shade guide was used, 58% of the shade selections were accurate. This ratio was 57.6% when the value-ordered shade guide was used. The observers repeated 55.5% of the selections accurately with the hue/chroma-ordered shade guide and 54.3% with the value-ordered shade guide. The accuracy and repeatability of shade selections by restorative dentists were similar when different arrangements (hue/chroma-ordered and value-ordered) of the Vitapan Classical shade guide were used. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
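Fisher's exact test, as used in the analysis above, can be run on a 2 x 2 table of accurate versus inaccurate selections; the sketch below uses SciPy with purely illustrative counts (not the study's raw data).

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table of accurate vs. inaccurate selections for the two
# shade-guide arrangements (counts are illustrative, not the study's data).
table = [[142, 103],   # hue/chroma-ordered guide
         [141, 104]]   # value-ordered guide

oddsratio, p_value = fisher_exact(table)
print(f"odds ratio = {oddsratio:.2f}, p = {p_value:.3f}")  # p > .05 -> no significant difference
```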
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is thus coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is itself generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences through a bank of Gabor filters; and a robotic actuator that performs the corresponding task (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
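The phase-difference principle behind the disparity module can be illustrated in one dimension; the sketch below uses a single complex Gabor filter with made-up parameters and a toy stereo pair, not the paper's filter bank or camera setup.

```python
import numpy as np

def gabor(omega, sigma, length=65):
    """Complex 1-D Gabor kernel with peak spatial frequency omega (rad/pixel)."""
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * omega * x)

def phase_disparity(left, right, omega=0.25, sigma=8.0):
    """Disparity (pixels) from the local phase difference of left/right Gabor responses."""
    k = gabor(omega, sigma)
    rl = np.convolve(left, k, mode="same")
    rr = np.convolve(right, k, mode="same")
    dphi = np.angle(rl * np.conj(rr))   # phase difference, wrapped to (-pi, pi]
    return dphi / omega                 # valid while |disparity| < pi / omega

# Toy stereo pair: the right signal is the left signal shifted by 3 pixels.
rng = np.random.default_rng(0)
x = np.arange(512)
left = np.cos(0.25 * x) + 0.05 * rng.standard_normal(512)
right = np.cos(0.25 * (x - 3.0)) + 0.05 * rng.standard_normal(512)
print(np.median(phase_disparity(left, right)[50:-50]))  # approximately 3
```

Dividing by the filter's peak frequency assumes the local image frequency is close to it; fuller implementations estimate the local frequency and pool several filter scales.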
Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L.
2012-01-01
The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area’s role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area’s functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory. PMID:22761923
DVV: a taxonomy for mixed reality visualization in image guided surgery.
Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis
2012-02-01
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Matasci, Naim
2011-03-01
The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization, and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the real-time exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing" [1], in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive exploration across independent visualization and visual analysis components.
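The brushing mechanism described above amounts to a publish/subscribe pattern between visualization components; the sketch below is a toy illustration with invented class names, not the iPlant or Metadata Mapper API.

```python
class MetadataMapper:
    """Toy broker that forwards a selection made in one view to every other
    registered view, so shared metadata stays linked across components."""
    def __init__(self):
        self._views = []

    def register(self, view):
        self._views.append(view)

    def brush(self, source, selected_ids):
        for view in self._views:
            if view is not source:
                view.highlight(selected_ids)

class View:
    def __init__(self, name, mapper):
        self.name, self.mapper = name, mapper
        mapper.register(self)

    def select(self, ids):        # the user drags over items in this view...
        print(f"{self.name}: selected {sorted(ids)}")
        self.mapper.brush(self, ids)

    def highlight(self, ids):     # ...and every other view highlights the same items
        print(f"{self.name}: highlighting {sorted(ids)}")

mapper = MetadataMapper()
scatter, heatmap = View("scatter plot", mapper), View("heat map", mapper)
scatter.select({3, 7, 11})
```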
Handbook for Teachers of the Visually Handicapped.
ERIC Educational Resources Information Center
Napier, Grace D.; Weishahn, Mel W.
Designed to aid the inexperienced teacher of the visually handicapped, the handbook examines aspects of program objectives, content, philosophy, methods, eligibility, and placement procedures. The guide to material selection provides specific information on the acquisition of Braille materials, large type materials, recorded materials, direct…
Pilot/vehicle model analysis of visually guided flight
NASA Technical Reports Server (NTRS)
Zacharias, Greg L.
1991-01-01
Information is given in graphical and outline form on a pilot/vehicle model description, control of altitude with simple terrain clues, simulated flight with visual scene delays, model-based in-cockpit display design, and some thoughts on the role of pilot/vehicle modeling.
Sequencing Stories in Spanish and English.
ERIC Educational Resources Information Center
Steckbeck, Pamela Meza
The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…
Advanced Texas Studies: Curriculum Guide.
ERIC Educational Resources Information Center
Harlandale Independent School District, San Antonio, TX. Career Education Center.
The guide is arranged in vertical columns relating curriculum concepts in Texas studies to curriculum performance objectives, career concepts and career performance objectives, suggested teaching methods, and audio-visual and resource materials. Career information is included on 24 related occupations. Space is provided for teachers' notes which…
The Benefit of Positive Visualization on the U.S. Army
2014-06-13
calm, guided imagery allows individuals to envision what it would be like to be in an ideally peaceful, serene, and comforting scene. Typically, guided imagery is conducted by a qualified mental health specialist, hence the term…
Fluorescence-guided surgical resection of oral cancer reduces recurrence
NASA Astrophysics Data System (ADS)
Lane, Pierre; Poh, Catherine F.; Durham, J. Scott; Zhang, Lewei; Lam, Sylvia F.; Rosin, Miriam; MacAulay, Calum
2011-03-01
Approximately 36,000 people in the US will be newly diagnosed with oral cancer in 2010 and it will cause 8,000 new deaths. The death rate is unacceptably high because oral cancer is usually discovered late in its development and is often difficult to treat or remove completely. Data collected over the last 5 years at the BC Cancer Agency suggest that the surgical resection of oral lesions guided by the visualization of the alteration of endogenous tissue fluorescence can dramatically reduce the rate of cancer recurrence. Four years into a study which compares conventional versus fluorescence-guided surgical resection, we reported a recurrence rate of 25% (7 of 28 patients) for the control group compared to a recurrence rate of 0% (none of the 32 patients) for the fluorescence-guided group. Here we present recent results from this ongoing study, in which patients undergo surgical resection of oral cancer either conventionally under white-light illumination or using tools that enable the visualization of naturally occurring tissue fluorescence.
Ivarsen, Anders; Hjortdal, Jesper Ø
2014-06-01
To report the outcome of topography-guided photorefractive keratectomy (PRK) after complicated small incision lenticule extraction (SMILE). Retrospective case series of 5 eyes with irregular topography and ghost images after complicated SMILE. All eyes received transepithelial topography-guided PRK. Two eyes were treated with 0.02% mitomycin C. Patients were examined after a minimum of 3 months with evaluation of uncorrected (UDVA) and corrected (CDVA) distance visual acuity, Pentacam tomography (Oculus Optikgeräte, Wetzlar, Germany), and whole-eye aberrometry. In 3 eyes, subjective symptoms were diminished and UDVA, CDVA, topography, and corneal wavefront aberrations were improved. The remaining 2 eyes developed significant haze with worsened topography and wavefront aberrations. One eye experienced a two-line reduction in CDVA. Eyes with haze development had not been treated with mitomycin C. Transepithelial topography-guided PRK may reduce visual symptoms after complicated SMILE if postoperative haze can be controlled. To reduce the risk of haze development, application of mitomycin C may be considered. Copyright 2014, SLACK Incorporated.
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, a hybrid multiscale decomposition of the source images using guided image filtering and gradient domain guided image filtering is applied first; weight maps for each scale are then obtained using a saliency detection technique and filtering, with three different fusion rules applied at different scales. The three fusion rules correspond to the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and easier to detect in the fused image, while the detail information of the scene is fully preserved. Experimental comparisons with state-of-the-art fusion methods show that HMSD-GDGF has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual quality is therefore improved by the proposed HMSD-GDGF method.
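For readers unfamiliar with guided-filtering fusion, the sketch below shows a simplified two-scale variant: saliency-based weight maps are refined with a plain guided filter before base and detail layers are blended. It is only a rough stand-in for, not an implementation of, the hybrid multiscale gradient-domain method proposed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(I, p, r=8, eps=1e-3):
    """Plain guided image filter: edge-preserving smoothing of p, guided by I."""
    box = lambda a: uniform_filter(a, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def fuse(ir, vis):
    """Two-scale fusion of co-registered infrared/visible float images in [0, 1]."""
    base_ir, base_vis = uniform_filter(ir, 31), uniform_filter(vis, 31)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Saliency: smoothed absolute Laplacian response; winner-take-all weight map.
    sal_ir = uniform_filter(np.abs(laplace(ir)), 7)
    sal_vis = uniform_filter(np.abs(laplace(vis)), 7)
    w = (sal_ir >= sal_vis).astype(float)
    # Refine the binary weights with the guided filter so they follow image edges
    # (the infrared image is used as the guide here purely for simplicity).
    w_base = guided_filter(ir, w, r=45, eps=0.3)
    w_det = guided_filter(ir, w, r=7, eps=1e-6)
    fused = (w_base * base_ir + (1 - w_base) * base_vis
             + w_det * det_ir + (1 - w_det) * det_vis)
    return np.clip(fused, 0.0, 1.0)

# Usage (assuming `ir` and `vis` are float arrays in [0, 1] of equal shape):
# fused = fuse(ir, vis)
```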
NASA Astrophysics Data System (ADS)
Unachukwu, Uchenna John; Warren, Alice; Li, Ze; Mishra, Shawn; Zhou, Jing; Sauane, Moira; Lim, Hyungsik; Vazquez, Maribel; Redenti, Stephen
2016-03-01
To replace photoreceptors lost to disease or trauma and restore vision, laboratories around the world are investigating photoreceptor replacement strategies using subretinal transplantation of photoreceptor precursor cells (PPCs) and retinal progenitor cells (RPCs). Significant obstacles to advancement of photoreceptor cell-replacement include low migration rates of transplanted cells into host retina and an absence of data describing chemotactic signaling guiding migration of transplanted cells in the damaged retinal microenvironment. To elucidate chemotactic signaling guiding transplanted cell migration, bioinformatics modeling of PPC transplantation into light-damaged retina was performed. The bioinformatics modeling analyzed whole-genome expression data and matched PPC chemotactic cell-surface receptors to cognate ligands expressed in the light-damaged retinal microenvironment. A library of significantly predicted chemotactic ligand-receptor pairs, as well as downstream signaling networks, was generated. PPC and RPC migration in microfluidic ligand gradients was analyzed using a highly predicted ligand-receptor pair, SDF-1α - CXCR4, and both PPCs and RPCs exhibited significant chemotaxis. This work presents a systems-level model and begins to elucidate molecular mechanisms involved in PPC and RPC migration within the damaged retinal microenvironment.
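The receptor-ligand matching step can be pictured as a join between expression tables and a curated pair list; the sketch below uses pandas with invented gene lists, expression values, and an assumed expression threshold, not the study's whole-genome data or pipeline.

```python
import pandas as pd

# Curated ligand-receptor pairs (illustrative subset; a real analysis would use a full database).
pairs = pd.DataFrame({"ligand": ["CXCL12", "HGF", "GDNF"],
                      "receptor": ["CXCR4", "MET", "GFRA1"]})

# Hypothetical normalized expression calls for donor cells and host tissue.
donor_receptors = pd.DataFrame({"gene": ["CXCR4", "MET", "NTRK2"], "expr": [7.9, 5.1, 2.0]})
host_ligands = pd.DataFrame({"gene": ["CXCL12", "GDNF", "BDNF"], "expr": [8.4, 1.2, 6.7]})

THRESH = 4.0  # assumed "expressed" cutoff
hits = (pairs
        .merge(donor_receptors.query("expr > @THRESH"), left_on="receptor", right_on="gene")
        .merge(host_ligands.query("expr > @THRESH"), left_on="ligand", right_on="gene",
               suffixes=("_receptor", "_ligand")))
print(hits[["ligand", "receptor", "expr_receptor", "expr_ligand"]])
# -> CXCL12/CXCR4 survives both filters, mirroring the SDF-1α - CXCR4 pair tested above.
```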
Environmental Constraints Guide Migration of Malaria Parasites during Transmission
Hellmann, Janina Kristin; Münter, Sylvia; Kudryashev, Mikhail; Schulz, Simon; Heiss, Kirsten; Müller, Ann-Kristin; Matuschewski, Kai; Spatz, Joachim P.; Schwarz, Ulrich S.; Frischknecht, Friedrich
2011-01-01
Migrating cells are guided in complex environments mainly by chemotaxis or structural cues presented by the surrounding tissue. During transmission of malaria, parasite motility in the skin is important for Plasmodium sporozoites to reach the blood circulation. Here we show that sporozoite migration varies in different skin environments the parasite encounters at the arbitrary sites of the mosquito bite. In order to systematically examine how sporozoite migration depends on the structure of the environment, we studied it in micro-fabricated obstacle arrays. The trajectories observed in vivo and in vitro closely resemble each other, suggesting that structural constraints can be sufficient to guide Plasmodium sporozoites in complex environments. Sporozoite speed in different environments is optimized for migration and correlates with persistence length and dispersal. However, this correlation breaks down in mutant sporozoites that show adhesion impairment due to the lack of TRAP-like protein (TLP) on their surfaces. This may explain their delay in infecting the host. The flexibility of sporozoite adaptation to different environments and a favorable speed for optimal dispersal ensure efficient host switching during malaria transmission. PMID:21698220
Separate visual representations for perception and for visually guided behavior
NASA Technical Reports Server (NTRS)
Bridgeman, Bruce
1989-01-01
Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.
Ivanov, Iliya V; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X; Trauzettel-Klosinski, Susanne
2016-01-01
Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both, the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training condition. We conclude that the exploratory saccade training was beneficial for RP patients and resulted in shorter fixation durations after the training. We also found a significant improvement in relative walking speed during navigation in a real-world like controlled environment.
Feast for the Eyes: An Introduction to Data Visualization.
Brigham, Tara J
2016-01-01
Data visualization is defined as the presentation of data in a graphical or pictorial manner. While data visualization is not a new concept, the ease with which anyone can create a data-driven chart, image, or visual has encouraged its growth. The increase in free sources of data and the need for user-created content on social media have also led to a rise in data visualization's popularity. This column will explore what data visualization is and how it is currently being used. It will also discuss the benefits, potential problems, and uses in libraries. A brief list of visualization guides is included.
Luani, Blerim; Zrenner, Bernhard; Basho, Maksim; Genz, Conrad; Rauwolf, Thomas; Tanev, Ivan; Schmeisser, Alexander; Braun-Dullaeus, Rüdiger C
2018-01-01
Stochastic damage from ionizing radiation to both patients and medical staff is a drawback of fluoroscopic guidance during catheter ablation of cardiac arrhythmias. Therefore, emerging zero-fluoroscopy catheter-guidance techniques are of great interest. We investigated, in a prospective pilot study, the feasibility and safety of the cryothermal (CA) slow-pathway ablation in patients with symptomatic atrioventricular-nodal-re-entry-tachycardia (AVNRT) using solely intracardiac echocardiography (ICE) for endovascular and endocardial catheter visualization. Twenty-five consecutive patients (mean age 55.6 ± 12.0 years, 17 female) with ECG-documentation or symptoms suggesting AVNRT underwent an electrophysiology study (EPS) in our laboratory utilizing ICE for catheter navigation. Supraventricular tachycardia was inducible in 23 (92%) patients; AVNRT was confirmed by appropriate stimulation maneuvers in 20 (80%) patients. All EPS in the AVNRT subgroup could be accomplished without the need for fluoroscopy, relying solely on ICE guidance. CA guided by anatomical location and slow-pathway potentials was successful in all patients, median cryo-mappings = 6 (IQR:3-10), median cryo-ablations = 2 (IQR:1-3). Fluoroscopy was used to facilitate the trans-septal puncture and localization of the ablation substrate in the remaining 3 patients (one focal atrial tachycardia and two atrioventricular-re-entry-tachycardias). Mean EPS duration in the AVNRT subgroup was 99.8 ± 39.6 minutes, ICE-guided catheter placement 11.9 ± 5.8 minutes, time needed for diagnostic evaluation 27.1 ± 10.8 minutes, and cryo-application duration 26.3 ± 30.8 minutes. ICE-guided zero-fluoroscopy CA in AVNRT patients is feasible and safe. Real-time visualization of the true endovascular borders and cardiac structures allows for safe catheter navigation during the ICE-guided EPS and might be an alternative to visualization technologies using geometry reconstructions. © 2017 Wiley Periodicals, Inc.
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
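One simple way to quantify the goodness-of-fit assessed in such benchmarking is a root-mean-square misfit between simulated and measured quantities at matching times; the sketch below uses placeholder flow-front positions, not the study's benchmark data.

```python
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square misfit between a model output and benchmark measurements."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

# Placeholder flow-front positions (m) at matching times for two hypothetical codes
# versus an analogue experiment; real benchmarks would also compare thickness profiles.
experiment = [0.10, 0.18, 0.24, 0.29, 0.33]
code_a     = [0.11, 0.19, 0.25, 0.31, 0.36]
code_b     = [0.08, 0.15, 0.22, 0.27, 0.30]

for name, run in (("code A", code_a), ("code B", code_b)):
    print(f"{name}: RMSE = {rmse(run, experiment):.3f} m")
```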
Development of a mobile robot for the 1995 AUVS competition
NASA Astrophysics Data System (ADS)
Matthews, Bradley O.; Ruthemeyer, Michael A.; Perdue, David; Hall, Ernest L.
1995-12-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data are collected through a commercial tracking device, which communicates the X,Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results, showing that at five mph the vehicle can follow a line while avoiding obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.
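The sonar-to-steering step described above can be sketched as a simple proportional rule: each transducer that reports a nearby obstacle pushes the steering away from its side. The layout, gains, and threshold below are invented for illustration and are not the competition vehicle's firmware.

```python
import math

# Hypothetical layout: six sonars at fixed bearings (degrees) relative to the heading.
SONAR_BEARINGS_DEG = [-75, -45, -15, 15, 45, 75]
MAX_RANGE_M = 3.0          # readings at or beyond this range are ignored
GAIN_DEG_PER_M = 12.0      # assumed proportional gain

def steering_correction(ranges_m):
    """Return a steering-angle correction in degrees (positive = steer right)."""
    correction = 0.0
    for bearing, r in zip(SONAR_BEARINGS_DEG, ranges_m):
        if r < MAX_RANGE_M:
            # An obstacle on the left (negative bearing) produces a rightward correction,
            # weighted more heavily when it lies close to the direction of travel.
            push = GAIN_DEG_PER_M * (MAX_RANGE_M - r)
            correction += push * (1.0 if bearing < 0 else -1.0) * math.cos(math.radians(bearing))
    return correction

print(steering_correction([3.0, 1.2, 3.0, 3.0, 3.0, 3.0]))  # obstacle at -45 deg -> steer right
```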
Guidance of visual search by memory and knowledge.
Hollingworth, Andrew
2012-01-01
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.