A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
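The closed-loop architecture described here (tail posture in, visual-world update out) can be sketched in a few lines. This is a minimal illustration only: `infer_displacement` stands in for the mapping the authors learned from their free-swimming library, and is a placeholder here, not their model.

```python
def virtual_reality_step(tail_shape, world_position, infer_displacement,
                         continuous=True, bout_ended=False):
    """One cycle of the closed loop: infer the larva's intended
    displacement from its tail posture, then shift the visual world
    either on every frame (continuous feedback) or only once the swim
    bout has ended. `infer_displacement` is a placeholder for the
    learned tail-to-trajectory map."""
    dx = infer_displacement(tail_shape)
    if continuous or bout_ended:
        # the projected world moves opposite to the fish's inferred motion
        world_position -= dx
    return world_position
```

With `continuous=False`, the world is frozen mid-bout and updated only at bout end, which is the perturbation condition that impaired prey capture.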
Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Andres, Kristin N; Powell, David K; Zhong, Xiaodong; Fornwalt, Brandon K
2017-03-01
To determine the optimal respiratory navigator gating configuration for the quantification of left ventricular strain using spiral cine displacement encoding with stimulated echoes (DENSE) MRI. Two-dimensional spiral cine DENSE was performed on a 3 Tesla MRI using two single-navigator configurations (retrospective, prospective) and a combined "dual-navigator" configuration in 10 healthy adults and 20 healthy children. The adults also underwent breathhold DENSE as a reference standard for comparisons. Peak left ventricular strains, signal-to-noise ratio (SNR), and navigator efficiency were compared. Subjects also underwent dual-navigator gating with and without visual feedback to determine the effect on navigator efficiency. There were no differences in circumferential, radial, and longitudinal strains between navigator-gated and breathhold DENSE (P = 0.09-0.95) (as confidence intervals, retrospective: [-1.0%, 1.1%], [-7.4%, 2.0%], [-1.0%, 1.2%]; prospective: [-0.6%, 2.7%], [-2.8%, 8.3%], [-0.3%, 2.9%]; dual: [-1.6%, 0.5%], [-8.3%, 3.2%], [-0.8%, 1.9%], respectively). The dual configuration maintained SNR compared with breathhold acquisitions (16 versus 18, P = 0.06). SNR for the prospective configuration was lower than for the dual navigator in adults (P = 0.004) and children (P < 0.001). Navigator efficiency was higher (P < 0.001) for both retrospective (54%) and prospective (56%) configurations compared with the dual configuration (35%). Visual feedback improved the dual configuration navigator efficiency to 55% (P < 0.001). When quantifying left ventricular strains using spiral cine DENSE MRI, a dual navigator configuration results in the highest SNR in adults and children. In adults, a retrospective configuration has good navigator efficiency without a substantial drop in SNR. Prospective gating should be avoided because it has the lowest SNR. Visual feedback represents an effective option to maintain navigator efficiency while using a dual navigator configuration. Level of Evidence: 2. J. Magn. Reson. Imaging 2017;45:786-794. © 2016 International Society for Magnetic Resonance in Medicine.
Vibrotactile Feedbacks System for Assisting the Physically Impaired Persons for Easy Navigation
NASA Astrophysics Data System (ADS)
Safa, M.; Geetha, G.; Elakkiya, U.; Saranya, D.
2018-04-01
The NAYAN architecture helps visually impaired persons navigate. As is well known, visually impaired people require special support even to access services such as public transportation. The prototype is a portable device that is easy to carry when travelling through familiar and unfamiliar environments. The system consists of a GPS receiver that obtains NMEA data from satellites and supplies it to the user's smartphone through an Arduino board. The application uses two vibrotactile actuators, placed on the left and right shoulders, whose vibration feedback conveys information about the current location. An ultrasonic sensor detects obstacles in front of the visually impaired person. A Bluetooth module connected to the Arduino board sends the information received from the GPS to the user's mobile phone.
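As a rough illustration of the data path described above, the sketch below extracts a position from an NMEA $GPRMC sentence and picks a shoulder motor from the heading error. The field layout follows the NMEA 0183 convention; the cue-selection rule and function names are assumptions for illustration, not details from the paper.

```python
def parse_gprmc(sentence):
    """Parse latitude/longitude from a $GPRMC NMEA sentence.
    Per NMEA 0183: latitude is ddmm.mmmm, longitude is dddmm.mmmm,
    each followed by an N/S or E/W hemisphere flag."""
    fields = sentence.split(',')
    lat = float(fields[3][:2]) + float(fields[3][2:]) / 60.0
    if fields[4] == 'S':
        lat = -lat
    lon = float(fields[5][:3]) + float(fields[5][3:]) / 60.0
    if fields[6] == 'W':
        lon = -lon
    return lat, lon

def shoulder_cue(heading_deg, bearing_deg):
    """Choose which shoulder motor to vibrate: steer toward the smaller
    angular error (a hypothetical cue scheme, not from the paper)."""
    err = (bearing_deg - heading_deg + 180) % 360 - 180
    return 'right' if err > 0 else 'left'
```

A system like this would vibrate the right shoulder when the route bearing lies clockwise of the current heading, and the left shoulder otherwise.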
Assessment of feedback modalities for wearable visual aids in blind mobility
Sorrentino, Paige; Bohlool, Shadi; Zhang, Carey; Arditti, Mort; Goodrich, Gregory; Weiland, James D.
2017-01-01
Sensory substitution devices engage sensory modalities other than vision to communicate information typically obtained through the sense of sight. In this paper, we examine the ability of subjects who are blind to follow simple verbal and vibrotactile commands that allow them to navigate a complex path. A total of eleven visually impaired subjects were enrolled in the study. Prototype systems were developed to deliver verbal and vibrotactile commands so that an investigator could guide a subject through a course. Using these systems, subjects could follow commands easily and navigate significantly faster than with their cane alone (p < 0.05). The two feedback modes yielded similar increases in speed of course completion. Subjects rated the usability of the feedback systems as "above average," with scores of 76.3 and 90.9 on the system usability scale. PMID:28182731
Aging and Sensory Substitution in a Virtual Navigation Task.
Levy-Tzedek, S; Maidenbaum, S; Amedi, A; Lackner, J
2016-01-01
Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with auditory cues, participants took longer to complete the mazes, followed longer paths through them, paused more, and collided with the walls more often than with visual cues. The older group likewise took longer to complete the mazes, paused more, and collided with the walls more often than the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that there is a decline in performance with age and that, while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.
Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico
2014-01-01
Needle insertion in soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.
Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Grabau, Jonathan D; Andres, Kristin N; Vandsburger, Moriel H; Powell, David K; Sorrell, Vincent L; Fornwalt, Brandon K
2016-09-06
Advanced cardiovascular magnetic resonance (CMR) acquisitions often require long scan durations that necessitate respiratory navigator gating. The tradeoff of navigator gating is reduced scan efficiency, particularly when the patient's breathing patterns are inconsistent, as is commonly seen in children. We hypothesized that engaging pediatric participants with a navigator-controlled videogame to help control breathing patterns would improve navigator efficiency and maintain image quality. We developed custom software that processed the Siemens respiratory navigator image in real-time during CMR and represented diaphragm position using a cartoon avatar, which was projected to the participant in the scanner as visual feedback. The game incentivized children to breathe such that the avatar was positioned within the navigator acceptance window (±3 mm) throughout image acquisition. Using a 3T Siemens Tim Trio, 50 children (Age: 14 ± 3 years, 48 % female) with no significant past medical history underwent a respiratory navigator-gated 2D spiral cine displacement encoding with stimulated echoes (DENSE) CMR acquisition first with no feedback (NF) and then with the feedback game (FG). Thirty of the 50 children were randomized to undergo extensive off-scanner training with the FG using a MRI simulator, or no off-scanner training. Navigator efficiency, signal-to-noise ratio (SNR), and global left-ventricular strains were determined for each participant and compared. Using the FG improved average navigator efficiency from 33 ± 15 to 58 ± 13 % (p < 0.001) and improved SNR by 5 % (p = 0.01) compared to acquisitions with NF. There was no difference in navigator efficiency (p = 0.90) or SNR (p = 0.77) between untrained and trained participants for FG acquisitions. Circumferential and radial strains derived from FG acquisitions were slightly reduced compared to NF acquisitions (-16 ± 2 % vs -17 ± 2 %, p < 0.001; 40 ± 10 % vs 44 ± 11 %, p = 0.005, respectively). 
There were no differences in longitudinal strain (p = 0.38). Use of a respiratory navigator feedback game during navigator-gated CMR improved navigator efficiency in children from 33 to 58 %. This improved efficiency was associated with a 5 % increase in SNR for spiral cine DENSE. Extensive off-scanner training was not required to achieve the improvement in navigator efficiency.
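The gating logic behind the reported navigator efficiencies can be illustrated with a short sketch. Only the ±3 mm acceptance window comes from the study above; the diaphragm positions below are synthetic values chosen for illustration.

```python
def navigator_efficiency(positions_mm, reference_mm=0.0, window_mm=3.0):
    """Fraction of heartbeats accepted by respiratory navigator gating:
    data are kept only when the diaphragm position reported by the
    navigator falls inside the acceptance window (here +/-3 mm, the
    window used in the study above)."""
    accepted = sum(1 for p in positions_mm
                   if abs(p - reference_mm) <= window_mm)
    return accepted / len(positions_mm)

# synthetic diaphragm positions: erratic free breathing vs. breathing
# steadied by the visual feedback game (values are illustrative only)
free_breathing = [0.0, 5.2, -4.1, 2.0, 7.5, 1.0, -6.3, 2.9]
with_feedback = [0.5, 1.2, -2.1, 2.0, 0.4, 1.0, -1.3, 2.9]
```

Holding the diaphragm inside the window more often raises the fraction of accepted beats, which is exactly the efficiency gain (33 % to 58 %) the feedback game produced.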
Brighton, Caroline H.; Thomas, Adrian L. R.
2017-01-01
The ability to intercept uncooperative targets is key to many diverse flight behaviors, from courtship to predation. Previous research has looked for simple geometric rules describing the attack trajectories of animals, but the underlying feedback laws have remained obscure. Here, we use GPS loggers and onboard video cameras to study peregrine falcons, Falco peregrinus, attacking stationary targets, maneuvering targets, and live prey. We show that the terminal attack trajectories of peregrines are not described by any simple geometric rule as previously claimed, and instead use system identification techniques to fit a phenomenological model of the dynamical system generating the observed trajectories. We find that these trajectories are best—and exceedingly well—modeled by the proportional navigation (PN) guidance law used by most guided missiles. Under this guidance law, turning is commanded at a rate proportional to the angular rate of the line-of-sight between the attacker and its target, with a constant of proportionality (i.e., feedback gain) called the navigation constant (N). Whereas most guided missiles use navigation constants falling on the interval 3 ≤ N ≤ 5, peregrine attack trajectories are best fitted by lower navigation constants (median N < 3). This lower feedback gain is appropriate at the lower flight speed of a biological system, given its presumably higher error and longer delay. This same guidance law could find use in small visually guided drones designed to remove other drones from protected airspace. PMID:29203660
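The guidance law identified above can be sketched in a few lines of simulation. Only the form of the law (turn rate proportional to line-of-sight rate) and the sub-3 navigation constant come from the abstract; the geometry, speeds, and time step below are arbitrary illustrative choices.

```python
import math

def pn_turn_rate(N, los_rate):
    """Proportional navigation: commanded turn rate is N times the
    angular rate of the attacker-target line of sight."""
    return N * los_rate

def simulate_pn(N=2.6, dt=0.01, speed=40.0, target_speed=10.0):
    """Minimal 2D pursuit under PN guidance; returns the closest
    approach distance to the moving target. All scenario parameters
    are illustrative, not taken from the falcon data."""
    ax, ay, heading = 0.0, 0.0, 0.0      # attacker position and heading
    tx, ty = 100.0, 60.0                 # target starts ahead and above
    prev_los = math.atan2(ty - ay, tx - ax)
    closest = math.hypot(tx - ax, ty - ay)
    for _ in range(2000):                # 20 s of simulated flight
        tx += target_speed * dt          # target drifts along +x
        los = math.atan2(ty - ay, tx - ax)
        heading += pn_turn_rate(N, (los - prev_los) / dt) * dt
        prev_los = los
        ax += speed * math.cos(heading) * dt
        ay += speed * math.sin(heading) * dt
        closest = min(closest, math.hypot(tx - ax, ty - ay))
    return closest
```

With the feedback gain switched off (N = 0) the attacker flies straight and misses by the full cross-track offset, which makes the role of the line-of-sight feedback easy to see.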
Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M
2017-07-01
The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved success rate, led participants to place their trailing foot before crossing and their leading foot after crossing more consistently, and led them to place their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.
Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device.
Katzschmann, Robert K; Araki, Brandon; Rus, Daniela
2018-03-01
This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device presented is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of a user's waist, and the pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases.
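A minimal sketch of how a belt like ALVU's might map time-of-flight distance readings to motor drive levels. The linear mapping and the range limits below are illustrative assumptions, not the device's actual calibration.

```python
def vibration_intensity(distance_m, d_min=0.2, d_max=4.0):
    """Map a time-of-flight distance reading to a motor drive level in
    [0, 1]: nearer obstacles vibrate harder, nothing beyond d_max.
    The linear mapping and both range limits are assumed values."""
    if distance_m >= d_max:
        return 0.0
    if distance_m <= d_min:
        return 1.0
    return (d_max - distance_m) / (d_max - d_min)
```

Driving each motor in the haptic strap from the corresponding sensor in the belt with a rule of this shape lets the wearer feel free space (silence) versus an approaching surface (rising vibration).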
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
2014-01-01
Advancement in brain-computer interface (BCI) technology allows people to interact actively with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own bodies represents a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot through the combination of multisensory feedback to a BCI user. PMID:24987350
Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback
NASA Astrophysics Data System (ADS)
Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios
2013-08-01
The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, yet most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for blind and visually impaired people, owing to their inability to interpret graphical information. Alternative presentations of maps must therefore be explored in order to improve the accessibility of maps. Multiple types of sensory perception, such as touch and hearing, can work as a substitute for vision in the exploration of maps, and multimodal virtual environments seem to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch, using a haptic device. A sonification mechanism and a text-to-speech (TTS) engine also provide audio navigation information during the haptic exploration of the map.
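One plausible form for the sonification step is a linear elevation-to-pitch mapping. The paper does not specify its mapping, so the ranges and the linear scale below are assumptions made for illustration.

```python
def elevation_to_pitch(elevation, e_min=0.0, e_max=10.0,
                       f_min=220.0, f_max=880.0):
    """Hypothetical sonification mapping: linearly map a map-tile
    elevation onto a tone frequency in Hz, clamping out-of-range
    values. All range and frequency values are assumptions."""
    clamped = min(max(elevation, e_min), e_max)
    t = (clamped - e_min) / (e_max - e_min)
    return f_min + t * (f_max - f_min)
```

Under a rule of this shape, a user exploring the haptic elevation map by touch would hear higher terrain as higher-pitched tones.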
Navigators for motion detection during real-time MRI-guided radiotherapy
NASA Astrophysics Data System (ADS)
Stam, Mette K.; Crijns, Sjoerd P. M.; Zonnenberg, Bernard A.; Barendrecht, Maurits M.; van Vulpen, Marco; Lagendijk, Jan J. W.; Raaymakers, Bas W.
2012-11-01
An MRI-linac system provides direct MRI feedback and, with it, the possibility of adapting radiation treatments to the actual tumour position. This paper addresses the use of fast 1D MRI pencil-beam navigators for this feedback. The accuracy of the navigators was determined on a moving phantom, and the possibility of organ tracking and breath-hold monitoring based on navigator guidance was shown for the kidney. The phantom measurements show that navigators are accurate to within 0.5 mm and that the analysis has a time lag smaller than 30 ms. The correlation of 2D kidney images and navigators shows the possibility of complete organ tracking. Furthermore, breath-hold monitoring of the kidney is accurate to within 1.5 mm, allowing gated radiotherapy based on navigator feedback. Navigators are a fast and precise method for monitoring and real-time tracking of anatomical landmarks. As such, they provide direct MRI feedback on anatomical changes for more precise radiation delivery.
Airflow and optic flow mediate antennal positioning in flying honeybees
Roy Khurana, Taruni; Sane, Sanjay P
2016-01-01
To maintain their speeds during navigation, insects rely on feedback from their visual and mechanosensory modalities. Although optic flow plays an essential role in speed determination, it is less reliable under conditions of low light or sparse landmarks. Under such conditions, insects rely on feedback from antennal mechanosensors but it is not clear how these inputs combine to elicit flight-related antennal behaviours. We here show that antennal movements of the honeybee, Apis mellifera, are governed by combined visual and antennal mechanosensory inputs. Frontal airflow, as experienced during forward flight, causes antennae to actively move forward as a sigmoidal function of absolute airspeed values. However, corresponding front-to-back optic flow causes antennae to move backward, as a linear function of relative optic flow, opposite the airspeed response. When combined, these inputs maintain antennal position in a state of dynamic equilibrium. DOI: http://dx.doi.org/10.7554/eLife.14449.001 PMID:27097104
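The combined response reported above can be caricatured as a forward sigmoidal term in airspeed minus a linear backward term in optic flow. The functional forms follow the qualitative description in the abstract, but every parameter value below is illustrative, not fitted to the honeybee data.

```python
import math

def antennal_position(airspeed, optic_flow, k=0.5):
    """Toy model of the reported equilibrium: frontal airflow pushes
    the antenna forward as a sigmoidal function of airspeed, while
    front-to-back optic flow pulls it backward linearly; their sum
    sets the antennal position. Gain k and units are assumptions."""
    forward = 1.0 / (1.0 + math.exp(-airspeed))  # sigmoid in airspeed
    return forward - k * optic_flow              # linear backward term
```

Because the two terms act in opposite directions, matched increases in airspeed and optic flow (as in steady forward flight) leave the position near a dynamic equilibrium.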
Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.
Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun
2014-06-13
In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed by two stages: development of visual information delivery assistant (VIDA) with a stereo camera and adding a tactile feedback interface with dual actuators for guidance and distance feedbacks. In the first stage, user's pointing finger is automatically detected using color and disparity data from stereo images and then a 3D pointing direction of the finger is estimated with its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and the distance is then estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedbacks are perfectly identifiable to the blind.
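The real-time distance estimation stage rests on the standard pinhole stereo relation Z = f·B/d. A minimal sketch follows, with placeholder camera parameters rather than VIDA's actual calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo relation: depth Z = f * B / d, with focal
    length f in pixels, baseline B in metres, and disparity d in
    pixels. Parameter values passed in are placeholders."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Once an object lies on the estimated 3D pointing trajectory, a relation of this form converts its disparity into the distance that the tactile interface then encodes.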
Comparison of helmet-mounted display designs in support of wayfinding
NASA Astrophysics Data System (ADS)
Kumagai, Jason K.; Massel, Lisa; Tack, David; Bossi, Linda
2003-09-01
The Canadian Soldier Information Requirements Technology Demonstration (SIREQ TD) soldier modernization research and development program has conducted experiments to help determine the types and amount of information needed to support wayfinding across a range of terrain environments, the most effective display modality (visual, auditory or tactile) for providing that information while minimizing conflict with other infantry tasks, and the optimal interface design. In this study, seven different visual helmet-mounted display (HMD) designs were developed based on soldier feedback from previous studies. The displays and an in-service compass condition were contrasted to investigate how the visual HMD interfaces influenced navigation performance. Displays varied with respect to their information content, frame of reference, point of view, and display features. Twelve male infantry soldiers used all eight experimental conditions to locate bearings to waypoints. From a constant location, participants were required to face waypoints presented at offset bearings of 25, 65, and 120 degrees. Performance measures included time to identify waypoints, accuracy, and head misdirection errors. Subjective measures of performance included ratings of ease of use, acceptance for land navigation, and mental demand. Comments were collected to identify likes, dislikes, and possible improvements required for HMDs. Results underlined the potential performance enhancement of GPS-based navigation with HMDs, the requirement for explicit directional information, the desirability of both analog and digital information, the performance benefits of an egocentric frame of reference, the merit of a forward field of view, and the desirability of a guide to assist with landmarking. Implications for the information requirements and human factors design of HMDs for land-based navigational tasks are discussed.
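The explicit directional information such displays convey reduces to a bearing-to-waypoint computation plus a signed offset from the current heading. The sketch below uses the standard forward-azimuth formula; it is an illustration, not SIREQ TD's implementation.

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees clockwise from north)
    from the observer to a waypoint, via the standard forward-azimuth
    formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def offset_bearing(heading_deg, bearing_deg):
    """Signed offset the display must convey, wrapped to [-180, 180):
    positive means the waypoint lies clockwise of the heading."""
    return (bearing_deg - heading_deg + 180) % 360 - 180
```

An egocentric HMD would render this signed offset directly (for example, a 25, 65, or 120 degree cue to the soldier's left or right).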
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. they cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops based on translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
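The quantity central to this survey, translational optic flow as the ratio of speed to obstacle distance, and the kind of feedback loop built on it can be sketched as follows. The setpoint and gain are illustrative choices, not values from any insect study.

```python
def translational_optic_flow(speed, distance):
    """Translational optic flow magnitude (rad/s): the ratio of the
    observer's linear speed to the distance of a surface viewed at
    90 degrees from the direction of travel."""
    return speed / distance

def regulate_speed(speed, distance, of_setpoint=1.0, gain=0.5):
    """One step of a proportional optic-flow regulator: nudge the
    forward speed so the perceived optic flow tracks a setpoint, in
    the spirit of insect-inspired speed control. Setpoint and gain
    are illustrative."""
    error = of_setpoint - translational_optic_flow(speed, distance)
    return speed + gain * error
```

When the tunnel narrows (distance shrinks), optic flow rises above the setpoint and the regulator slows down, which is the qualitative behavior observed in cluttered-environment flight.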
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the feasibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further simplify these complex tasks. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorite objects. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: hybridizing simple BCI protocols provides extended controllability to carry out complicated tasks, even with a low-cost system. PMID:24023953
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
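The two ingredients named here can be sketched concretely. The snippet below assumes PCA as the subspace learner and a Rocchio-style query update as the relevance-feedback step; the abstract does not specify these exact algorithms, so treat this as an illustrative pairing only:

```python
import numpy as np

def pca_subspace(X, k):
    """Learn a k-dimensional subspace from feature matrix X
    (n_samples x n_dims) to mitigate high dimensionality."""
    mean = X.mean(axis=0)
    # right singular vectors of the centred data = PCA directions
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k].T            # projection: (X - mean) @ W

def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """One relevance-feedback round: move the query vector toward
    user-marked relevant examples and away from irrelevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q = q - gamma * np.mean(irrelevant, axis=0)
    return q
```

Projecting features first keeps each feedback round cheap, since the Rocchio update then operates on k-dimensional vectors rather than the raw high-dimensional descriptors.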
Tactile Cueing as a Gravitational Substitute for Spatial Navigation During Parabolic Flight
NASA Technical Reports Server (NTRS)
Montgomery, K. L.; Beaton, K. H.; Barba, J. M.; Cackler, J. M.; Son, J. H.; Horsfield, S. P.; Wood, S. J.
2010-01-01
INTRODUCTION: Spatial navigation requires an accurate awareness of orientation in your environment. The purpose of this experiment was to examine how spatial awareness was impaired by changing gravitational cues during parabolic flight, and the extent to which vibrotactile feedback of orientation could be used to help improve performance. METHODS: Six subjects were restrained in a chair tilted relative to the plane floor, and placed at random positions at the start of the microgravity phase. Subjects reported their orientation using verbal reports, and used a hand-held controller to point to a desired target location presented using a virtual reality video mask. This task was repeated with and without constant tactile cueing of the "down" direction using a belt of 8 tactors placed around the mid-torso. Control measures were obtained during ground testing using both upright and tilted conditions. RESULTS: Perceptual estimates of orientation and pointing accuracy were impaired during microgravity and during rotation about an upright axis in 1g. The amount of error was proportional to the amount of chair displacement. Perceptual errors were reduced during movement about a tilted axis on earth. CONCLUSIONS: Reduced perceptual errors during tilts in 1g indicate the importance of otolith and somatosensory cues for maintaining spatial awareness. Tactile cueing may improve navigation in operational environments or clinical populations by providing non-visual, non-auditory feedback of orientation or desired heading.
Ehinger, Benedikt V.; Fischer, Petra; Gert, Anna L.; Kaufhold, Lilli; Weber, Felix; Pipa, Gordon; König, Peter
2014-01-01
In everyday life, spatial navigation involving locomotion provides congruent visual, vestibular, and kinesthetic information that needs to be integrated. Yet, previous studies on human brain activity during navigation focus on stationary setups, neglecting vestibular and kinesthetic feedback. The aim of our work is to uncover the influence of those sensory modalities on cortical processing. We developed a fully immersive virtual reality setup combined with high-density mobile electroencephalography (EEG). Participants traversed one leg of a triangle, turned on the spot, continued along the second leg, and finally indicated the location of their starting position. Vestibular and kinesthetic information was provided either in combination, as isolated sources of information, or not at all within a 2 × 2 full factorial intra-subjects design. EEG data were processed by clustering independent components, and time-frequency spectrograms were calculated. In parietal, occipital, and temporal clusters, we detected alpha suppression during the turning movement, which is associated with a heightened demand of visuo-attentional processing and closely resembles results reported in previous stationary studies. This decrease is present in all conditions and therefore seems to generalize to more natural settings. Yet, in incongruent conditions, when different sensory modalities did not match, the decrease is significantly stronger. Additionally, in more anterior areas we found that providing only vestibular but no kinesthetic information results in alpha increase. These observations demonstrate that stationary experiments omit important aspects of sensory feedback. Therefore, it is important to develop more natural experimental settings in order to capture a more complete picture of neural correlates of spatial navigation. PMID:24616681
A mobile phone system to find crosswalks for visually impaired pedestrians
Shen, Huiying; Chan, Kee-Yip; Coughlan, James; Brabyn, John
2010-01-01
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian’s travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera’s field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections. PMID:20411035
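One simple property such a detector can exploit is the regular alternation of light and dark bands in a zebra crosswalk. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the actual system described above uses more sophisticated computer vision algorithms on the phone camera image:

```python
def count_stripe_transitions(scanline, threshold=128):
    """Count light/dark transitions along a 1-D row of pixel
    intensities; a zebra crosswalk produces a regular alternation
    of several bright and dark bands."""
    binary = [1 if p >= threshold else 0 for p in scanline]
    return sum(1 for a, b in zip(binary, binary[1:]) if a != b)

def looks_like_crosswalk(scanline, min_transitions=6):
    """Flag a scanline whose banding is dense enough to suggest
    painted stripes (threshold is an illustrative assumption)."""
    return count_stripe_transitions(scanline) >= min_transitions
```

In a full pipeline this kind of periodicity test would be applied across many image rows and combined with geometric checks before driving the audio alignment feedback.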
NASA Astrophysics Data System (ADS)
Walsh, Elizabeth Mary; McGowan, Veronica Cassone
2017-01-01
Science education trends promote student engagement in authentic knowledge in practice to tackle personally consequential problems. This study explored how partnering scientists and students on a social media platform supported students' development of disciplinary practice knowledge through practice-based learning with experts during two pilot enactments of a project-based curriculum focusing on the ecological impacts of climate change. Through the online platform, scientists provided feedback on students' infographics, visual argumentation artifacts that use data to communicate about climate change science. We conceptualize the infographics and professional data sets as boundary objects that supported authentic argumentation practices across classroom and professional contexts, but found that student-generated data was not robust enough to cross these boundaries. Analysis of the structure and content of the scientists' feedback revealed that when critiquing argumentation, scientists initiated engagement in multiple scientific practices, supporting a holistic rather than discrete model of practice-based learning. While traditional classroom inquiry has emphasized student experimentation, we found that engagement with existing professional data sets provided students with a platform for developing expertise in systemic scientific practices during argument construction. We further found that many students increased the complexity and improved the visual presentation of their arguments after feedback.
Foot placement relies on state estimation during visually guided walking.
Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S
2017-02-01
As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping relationship between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately guide our foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
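The fusion of prediction and feedback described here can be illustrated with a scalar Kalman-style update, in which the weight given to vision falls as its noise rises. This is a toy model for intuition, not the authors' actual mathematical model of the walking task:

```python
def fuse(prediction, observation, var_pred, var_obs):
    """Minimum-variance combination of a forward-model prediction
    with a noisy visual observation (scalar Kalman update)."""
    k = var_pred / (var_pred + var_obs)   # gain on the observation
    estimate = prediction + k * (observation - prediction)
    var_post = (1.0 - k) * var_pred       # fused estimate is less uncertain
    return estimate, var_post
```

With added visual noise (larger var_obs) the gain k shrinks, so a sudden visual shift moves the estimate less on the first trial, mirroring the smaller initial foot-placement error and the slower adaptation observed under prism noise.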
SILVA tree viewer: interactive web browsing of the SILVA phylogenetic guide trees.
Beccati, Alan; Gerken, Jan; Quast, Christian; Yilmaz, Pelin; Glöckner, Frank Oliver
2017-09-30
Phylogenetic trees are an important tool to study the evolutionary relationships among organisms. The huge number of available taxa poses difficulties for their interactive visualization. This hampers interaction with users, who provide feedback for the further improvement of the taxonomic framework. The SILVA Tree Viewer is a web application designed for visualizing large phylogenetic trees without requiring the download of any software tool or data files. It is based on Web Geographic Information Systems (Web-GIS) technology with a PostgreSQL backend, and enables zoom and pan functionalities similar to Google Maps. The SILVA Tree Viewer provides access to two phylogenetic (guide) trees provided by the SILVA database: the SSU Ref NR99, inferred from high-quality, full-length small subunit sequences clustered at 99% sequence identity, and the LSU Ref, inferred from high-quality, full-length large subunit sequences. The Tree Viewer provides tree navigation, search and browse tools, as well as an interactive feedback system to collect requests ranging from taxonomy to data curation and improvement of the tool itself.
Design and Evaluation of Shape-Changing Haptic Interfaces for Pedestrian Navigation Assistance.
Spiers, Adam J; Dollar, Aaron M
2017-01-01
Shape-changing interfaces are a category of device capable of altering their form in order to facilitate communication of information. In this work, we present a shape-changing device that has been designed for navigation assistance. 'The Animotus' (previously 'The Haptic Sandwich') resembles a cube with an articulated upper half that is able to rotate and extend (translate) relative to the bottom half, which is fixed in the user's grasp. This rotation and extension, generally felt via the user's fingers, is used to represent heading and proximity to navigational targets. The device is intended to provide an alternative to screen or audio based interfaces for visually impaired, hearing impaired, deafblind, and sighted pedestrians. The motivation and design of the haptic device is presented, followed by the results of a navigation experiment that aimed to determine the role of each device DOF, in terms of facilitating guidance. An additional device, 'The Haptic Taco', which modulated its volume in response to target proximity (negating directional feedback), was also compared. Results indicate that while the heading (rotational) DOF benefited motion efficiency, the proximity (translational) DOF benefited velocity. Combination of the two DOF improved overall performance. The volumetric Taco performed comparably to the Animotus' extension DOF.
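The two DOF described (rotation for heading, extension for proximity) suggest a simple control mapping from a navigational target to device commands. The sketch below is a hypothetical illustration; the function name, range parameter, and linear proximity mapping are assumptions, not taken from the device firmware:

```python
import math

def animotus_command(user_xy, user_heading_rad, target_xy,
                     max_extension=1.0, range_m=20.0):
    """Map a navigational target to the device's two DOF:
    rotation encodes heading error, extension encodes proximity."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx)
    # wrap heading error into [-pi, pi)
    rotation = (bearing - user_heading_rad + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(dx, dy)
    # extension grows linearly with distance, saturating at range_m
    extension = max_extension * min(distance / range_m, 1.0)
    return rotation, extension
```

As the user turns toward the target the rotation command returns to zero, and as they approach it the extension retracts, which matches the guidance roles the experiment attributes to each DOF.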
Anisotropy of Human Horizontal and Vertical Navigation in Real Space: Behavioral and PET Correlates.
Zwergal, Andreas; Schöberl, Florian; Xiong, Guoming; Pradhan, Cauchy; Covic, Aleksandar; Werner, Philipp; Trapp, Christoph; Bartenstein, Peter; la Fougère, Christian; Jahn, Klaus; Dieterich, Marianne; Brandt, Thomas
2016-10-17
Spatial orientation was tested during a horizontal and vertical real navigation task in humans. Video tracking of eye movements was used to analyse the behavioral strategy and combined with simultaneous measurements of brain activation and metabolism ([18F]-FDG-PET). Spatial navigation performance was significantly better during horizontal navigation. Horizontal navigation was predominantly visually and landmark-guided. PET measurements indicated that glucose metabolism increased in the right hippocampus, bilateral retrosplenial cortex, and pontine tegmentum during horizontal navigation. In contrast, vertical navigation was less reliant on visual and landmark information. In PET, vertical navigation activated the bilateral hippocampus and insula. Direct comparison revealed a relative activation in the pontine tegmentum and visual cortical areas during horizontal navigation and in the flocculus, insula, and anterior cingulate cortex during vertical navigation. In conclusion, these data indicate a functional anisotropy of human 3D-navigation in favor of the horizontal plane. There are common brain areas for both forms of navigation (hippocampus) as well as unique areas such as the retrosplenial cortex, visual cortex (horizontal navigation), flocculus, and vestibular multisensory cortex (vertical navigation). Visually guided landmark recognition seems to be more important for horizontal navigation, while distance estimation based on vestibular input might be more relevant for vertical navigation. © The Author 2015. Published by Oxford University Press. All rights reserved.
Skordis-Worrall, Jolene; Pulkki-Brännström, Anni-Maria; Utley, Martin; Kembhavi, Gayatri; Bricki, Nouria; Dutoit, Xavier; Rosato, Mikey; Pagel, Christina
2012-12-21
There are calls for low and middle income countries to develop robust health financing policies to increase service coverage. However, existing evidence around financing options is complex and often difficult for policy makers to access. To summarize the evidence on the impact of financing health systems and develop an e-tool to help decision makers navigate the findings, we reviewed the literature and used thematic analysis to summarize the impact of 7 common health financing mechanisms on 5 common health system goals. Information on the relevance of each study to a user's context was provided by 11 country indicators. A Web-based e-tool was then developed to assist users in navigating the literature review. This tool was evaluated using feedback from early users, collected using an online survey and in-depth interviews with key informants. The e-tool provides graphical summaries that allow a user to assess the following parameters with a single snapshot: the number of relevant studies available in the literature, the heterogeneity of evidence, where key evidence is lacking, and how closely the evidence matches their own context. Users particularly liked the visual display and found navigating the tool intuitive. However, there was concern that a lack of evidence on positive impact might be construed as evidence against a financing option, and that the tool might over-simplify the available financing options. Complex evidence can be made more easily accessible and potentially more understandable using basic Web-based technology and innovative graphical representations that match findings to the users' goals and context.
Pavlova, Marina; Sokolov, Alexander; Krägeloh-Mann, Ingeborg
2007-02-01
Visual navigation in familiar and unfamiliar surroundings is an essential ingredient of adaptive daily life behavior. Recent brain imaging work helps to recognize that establishing connectivity between brain regions is of importance for successful navigation. Here, we ask whether the ability to navigate is impaired in adolescents who were born premature and suffer congenital bilateral periventricular brain damage that might affect the pathways interconnecting subcortical structures with cortex. Performance on a set of visual labyrinth tasks was significantly worse in patients with periventricular leukomalacia (PVL) as compared with premature-born controls without lesions and term-born adolescents. The ability for visual navigation inversely relates to the severity of motor disability, leg-dominated bilateral spastic cerebral palsy. This agrees with the view that navigation ability substantially improves with practice and might be compromised in individuals with restrictions in active spatial exploration. Visual navigation is negatively linked to the volumetric extent of lesions over the right parietal and frontal periventricular regions. Whereas impairments of visual processing of point-light biological motion are associated in patients with PVL with bilateral parietal periventricular lesions, navigation ability is specifically linked to the frontal lesions in the right hemisphere. We suggest that more anterior periventricular lesions impair the interrelations between the right hippocampus and cortical areas leading to disintegration of neural networks engaged in visual navigation. For the first time, we show that the severity of right frontal periventricular damage and leg-dominated motor disorders can serve as independent predictors of the visual navigation disability.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of both the hardware and software conception and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real-time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising further perspectives.
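Dead-reckoning from frame-to-frame displacement estimates is the core of any lightweight visual odometry and coverage-map construction. The sketch below assumes pure 2-D translation estimated as the mean motion of matched feature points; it is a minimal illustration, not the authors' method, and real pipelines estimate full 6-DOF motion with outlier rejection:

```python
def mean_displacement(pts_prev, pts_curr):
    """Average motion of matched feature points between two frames,
    used as a crude per-frame camera translation estimate."""
    n = len(pts_prev)
    dx = sum(c[0] - p[0] for p, c in zip(pts_prev, pts_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(pts_prev, pts_curr)) / n
    return dx, dy

def integrate_trajectory(start, frame_displacements):
    """Accumulate per-frame displacements into a trajectory,
    e.g. for a live site coverage map."""
    x, y = start
    path = [(x, y)]
    for dx, dy in frame_displacements:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

Because errors accumulate with every integration step, such dead-reckoned trajectories drift over long surveys, which is one reason low-resource methods matter for long-autonomy systems.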
Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor
Yang, Kailun; Wang, Kaiwei; Cheng, Ruiqi; Hu, Weijian; Huang, Xiao; Bai, Jian
2017-01-01
The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported, as they offer portability, function diversity, and cost-effectiveness. However, conventional RGB-D approaches provide only weak cues for traversability awareness and no safeguard against stepping into water areas. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable areas and water hazards simultaneously from polarization-color-depth-attitude information, enhancing safety during navigation. The approach has been tested on a pRGB-D dataset, which was built for tuning parameters and evaluating the performance. Moreover, the approach has been integrated into a wearable prototype which generates stereo sound feedback to guide visually impaired people (VIP) along the prioritized direction, avoiding obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability. PMID:28817069
Taux: A System for Evaluating Sound Feedback in Navigational Tasks
ERIC Educational Resources Information Center
Lutz, Robert J.
2008-01-01
This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first develops the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential of changing these…
López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias
2016-05-01
We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.
Toth, Adam J; Harris, Laurence R; Zettel, John; Bent, Leah R
2017-02-01
Visuo-vestibular recalibration, in which visual information is used to alter the interpretation of vestibular signals, has been shown to influence both oculomotor control and navigation. Here we investigate whether vision can recalibrate the vestibular feedback used during the re-establishment of equilibrium following a perturbation. The perturbation recovery responses of nine participants were examined following exposure to a period of 11 s of galvanic vestibular stimulation (GVS). During GVS in VISION trials, occlusion spectacles provided 4 s of visual information that enabled participants to correct for the GVS-induced tilt and associate this asymmetric vestibular signal with a visually provided 'upright'. NoVISION trials had no such visual experience. Participants used the visual information to assist in realigning their posture compared to when visual information was not provided (p < 0.01). The initial recovery response to a platform perturbation was not impacted by whether vision had been provided during the preceding GVS, as determined by peak centre of mass and pressure deviations (p = 0.09). However, after using vision to reinterpret the vestibular signal during GVS, final centre of mass and pressure equilibrium positions were significantly shifted compared to trials in which vision was not available (p < 0.01). These findings support previous work identifying a prominent role of vestibular input for re-establishing postural equilibrium following a perturbation. Our work is the first to highlight the capacity for visual feedback to recalibrate the vertical interpretation of vestibular reafference for re-establishing equilibrium following a perturbation. This demonstrates the rapid adaptability of the vestibular reafference signal for postural control.
Address entry while driving: speech recognition versus a touch-screen keyboard.
Tsimhoni, Omer; Smith, Daniel; Green, Paul
2004-01-01
A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.
FlyAR: augmented reality supported micro aerial vehicle navigation.
Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard
2014-04-01
Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context, automatic flight path planning and autonomous flying are often applied but so far cannot fully replace the human in the loop, supervising the flight on-site to assure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle's position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on the spatial understanding.
Voluntarily controlled but not merely observed visual feedback affects postural sway
Asai, Tomohisa; Hiromitsu, Kentaro; Imamizu, Hiroshi
2018-01-01
Online stabilization of human standing posture utilizes multisensory afferences (e.g., vision). Whereas visual feedback of spontaneous postural sway can stabilize postural control especially when observers concentrate on their body and intend to minimize postural sway, the effect of intentional control of visual feedback on postural sway itself remains unclear. This study assessed quiet standing posture in healthy adults voluntarily controlling or merely observing visual feedback. The visual feedback (moving square) had either low or high gain and was either horizontally flipped or not. Participants in the voluntary-control group were instructed to minimize their postural sway while voluntarily controlling visual feedback, whereas those in the observation group were instructed to minimize their postural sway while merely observing visual feedback. As a result, magnified and flipped visual feedback increased postural sway only in the voluntary-control group. Furthermore, regardless of the instructions and feedback manipulations, the experienced sense of control over visual feedback positively correlated with the magnitude of postural sway. We suggest that voluntarily controlled, but not merely observed, visual feedback is incorporated into the feedback control system for posture and begins to affect postural sway. PMID:29682421
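The gain and flip manipulations described above amount to a simple linear transform of the sway signal before it is displayed; a minimal sketch, with hypothetical units and parameter names, might look like:

```python
def feedback_position(sway_cm, gain=1.0, flipped=False):
    """Map mediolateral sway (cm) to the displayed square's displacement."""
    sign = -1.0 if flipped else 1.0
    return sign * gain * sway_cm

# High-gain, flipped feedback: a 1 cm rightward sway moves the square
# three units to the left.
print(feedback_position(1.0, gain=3.0, flipped=True))  # -3.0
```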
Kadoury, Samuel; Abi-Jaoudeh, Nadine; Levy, Elliot B.; Maass-Moreno, Roberto; Krücker, Jochen; Dalal, Sandeep; Xu, Sheng; Glossop, Neil; Wood, Bradford J.
2011-01-01
Purpose: To assess the feasibility of combined electromagnetic device tracking and computed tomography (CT)/ultrasonography (US)/fluorine 18 fluorodeoxyglucose (FDG) positron emission tomography (PET) fusion for real-time feedback during percutaneous and intraoperative biopsies and hepatic radiofrequency (RF) ablation. Materials and Methods: In this HIPAA-compliant, institutional review board–approved prospective study with written informed consent, 25 patients (17 men, eight women) underwent 33 percutaneous and three intraoperative biopsies of 36 FDG-avid targets between November 2007 and August 2010. One patient underwent biopsy and RF ablation of an FDG-avid hepatic focus. Targets demonstrated heterogeneous FDG uptake or were not well seen or were totally inapparent at conventional imaging. Preprocedural FDG PET scans were rigidly registered through a semiautomatic method to intraprocedural CT scans. Coaxial biopsy needle introducer tips and RF ablation electrode guider needle tips containing electromagnetic sensor coils were spatially tracked through an electromagnetic field generator. Real-time US scans were registered through a fiducial-based method, allowing US scans to be fused with intraprocedural CT and preacquired FDG PET scans. A visual display of US/CT image fusion with overlaid coregistered FDG PET targets was used for guidance; navigation software enabled real-time biopsy needle and needle electrode navigation and feedback. Results: Successful fusion of real-time US to coregistered CT and FDG PET scans was achieved in all patients. Thirty-one of 36 biopsies were diagnostic (malignancy in 18 cases, benign processes in 13 cases). RF ablation resulted in resolution of targeted FDG avidity, with no local treatment failure during short follow-up (56 days). 
Conclusion: Combined electromagnetic device tracking and image fusion with real-time feedback may facilitate biopsies and ablations of focal FDG PET abnormalities that would be challenging with conventional image guidance. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101985/-/DC1 PMID:21734159
Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.
Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan
2017-01-01
Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach
Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei
2017-01-01
One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. Here, an on-ground deployed visual approach is developed. This approach is particularly suitable for landing within global navigation satellite system (GNSS)-denied environments. In application, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. A separate long-baseline stereo architecture is proposed that offers an extendable baseline and a wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, the accuracy of the new architecture is evaluated through theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189
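The benefit of an extendable baseline can be illustrated with the standard first-order stereo depth-error model, dZ ≈ Z²·δd/(f·B); the rig parameters below are hypothetical and not taken from the paper.

```python
def depth_error(Z, focal_px, baseline_m, disparity_err_px=0.5):
    """First-order stereo depth uncertainty: dZ ~ Z^2 * dd / (f * B)."""
    return Z ** 2 / (focal_px * baseline_m) * disparity_err_px

# Hypothetical rig: 1000 px focal length, aircraft 200 m away.
short = depth_error(200.0, 1000.0, baseline_m=0.5)   # fixed on-board baseline
long_ = depth_error(200.0, 1000.0, baseline_m=20.0)  # separate on-ground baseline
print(short / long_)  # error shrinks in proportion to the baseline
```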
Visual Navigation in Nocturnal Insects.
Warrant, Eric; Dacke, Marie
2016-05-01
Despite their tiny eyes and brains, nocturnal insects have evolved a remarkable capacity to visually navigate at night. Whereas some use moonlight or the stars as celestial compass cues to maintain a straight-line course, others use visual landmarks to navigate to and from their nest. These impressive abilities rely on highly sensitive compound eyes and specialized visual processing strategies in the brain. ©2016 Int. Union Physiol. Sci./Am. Physiol. Soc.
Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.
de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie
2017-09-01
Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, but only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation; in particular, a dedicated turn-by-turn guidance device was favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. Keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.
Event Display for the Visualization of CMS Events
NASA Astrophysics Data System (ADS)
Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.
2011-12-01
During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program, which was by design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation, simple plugins are registered into the system to perform conversion from EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering as well as collection-level filtering are supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.
Neural Circuit to Integrate Opposing Motions in the Visual Field.
Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander
2015-07-16
When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?
McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh
2014-05-01
Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.
Vibrotactile Feedback for Brain-Computer Interface Operation
Cincotti, Febo; Kauhanen, Laura; Aloise, Fabio; Palomäki, Tapio; Caporusso, Nicholas; Jylänki, Pasi; Mattia, Donatella; Babiloni, Fabio; Vanacker, Gerolf; Nuttin, Marnix; Marciani, Maria Grazia; Millán, José del R.
2007-01-01
To be correctly mastered, brain-computer interfaces (BCIs) need an uninterrupted flow of feedback to the user. This feedback is usually delivered through the visual channel. Our aim was to explore the benefits of vibrotactile feedback during users' training and control of EEG-based BCI applications. A protocol for delivering vibrotactile feedback, including specific hardware and software arrangements, was specified. In three studies with 33 subjects (including 3 with spinal cord injury), we compared vibrotactile and visual feedback, addressing: (I) the feasibility of subjects' training to master their EEG rhythms using tactile feedback; (II) the compatibility of this form of feedback in the presence of a visual distracter; (III) the performance in the presence of a complex visual task on the same (visual) or a different (tactile) sensory channel. The stimulation protocol we developed supports general usage of the tactors, as confirmed in preliminary experiments. All studies indicated that the vibrotactile channel can function as a valuable feedback modality with reliability comparable to the classical visual feedback. Advantages of using vibrotactile feedback emerged when the visual channel was highly loaded by a complex task. In all experiments, vibrotactile feedback felt, after some training, more natural for both controls and SCI users. PMID:18354734
Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.
Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul
2018-01-08
Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
A navigation system for the visually impaired using colored navigation lines and RFID tags.
Seto, Tatsuya
2009-01-01
In this paper, we describe a navigation system we developed that supports the independent walking of the visually impaired in indoor spaces. Our instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor: a color sensor installed on the tip of the white cane senses the line, and the system informs the user by vibration that he/she is walking along it. The color recognition system is controlled by a one-chip microprocessor and can discriminate 6 colored navigation lines. RFID tags and a receiver for these tags, also installed on the white cane, form the map information system. The receiver reads tag information and announces map information to the user via pre-recorded mp3 voice. Three normal subjects who were blindfolded with an eye mask were tested with this system. All of them were able to walk along the navigation line, and the performance of the map information system was good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
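The line-color discrimination step could, in principle, be implemented as a nearest-neighbour match of the sensor reading against reference colors; the six reference values below are hypothetical, not the system's actual calibration.

```python
# Hypothetical reference colors for the six navigation lines (RGB, 0-255).
LINES = {
    "red": (200, 30, 30),
    "green": (30, 180, 60),
    "blue": (40, 60, 200),
    "yellow": (210, 200, 40),
    "orange": (230, 130, 30),
    "purple": (130, 40, 160),
}

def classify(rgb):
    """Match a color-sensor reading to the nearest reference line color."""
    return min(LINES, key=lambda name: sum((a - b) ** 2
                                           for a, b in zip(rgb, LINES[name])))

print(classify((195, 40, 35)))  # red
```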
Effects of aging on pointing movements under restricted visual feedback conditions.
Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong
2015-04-01
The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). The results suggested that Fitts' law held for the pointing movements of the elderly adults under the different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks; peripheral and central changes may be the key factors underlying these differences. Furthermore, no significant main effect of age on mean accuracy rate was found under restricted visual feedback. The present study suggested that the elderly subjects made very similar use of the available sensory information as the young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
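Fitts' law, which the study found to hold for elderly subjects as well, predicts movement time from an index of difficulty; a minimal sketch with hypothetical coefficients a and b:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Fitts' law: MT = a + b * log2(2D / W), with hypothetical coefficients."""
    return a + b * math.log2(2 * distance / width)

# Doubling the target distance (or halving its width) adds one bit of
# difficulty, i.e. a constant b seconds to the predicted movement time.
print(fitts_mt(32, 2) - fitts_mt(16, 2))  # approximately b
```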
Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho
2015-09-01
Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.
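The distortion paradigm can be sketched as a hidden scaling applied to one leg's step length before a symmetry index is displayed; the step lengths and the 15% distortion factor below are hypothetical, not values from the study.

```python
def symmetry(left, right):
    """Step-length symmetry index: 0 for equal steps, nonzero otherwise."""
    return (left - right) / (left + right)

def distorted_display(left, right, distortion=1.15):
    """Implicitly exaggerate the displayed left step length by a hidden factor."""
    return symmetry(left * distortion, right)

actual = symmetry(0.60, 0.60)           # truly symmetric gait -> 0.0
shown = distorted_display(0.60, 0.60)   # yet the screen reports an asymmetry
print(actual, round(shown, 3))
```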
Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus
Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.
2013-01-01
Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Kent, Peter C. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2009-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places "hot spots", or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Kent, Peter C. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2007-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places hot spots, or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Speed but not amplitude of visual feedback exacerbates force variability in older adults.
Kim, Changki; Yacoubi, Basma; Christou, Evangelos A
2018-06-23
Magnification of visual feedback (VF) impairs force control in older adults. In this study, we aimed to determine whether the age-associated increase in force variability with magnification of visual feedback is a consequence of increased amplitude or increased speed of visual feedback. Seventeen young and 18 older adults performed a constant isometric force task with the index finger at 5% of maximum voluntary contraction (MVC). We manipulated the vertical (force gain) and horizontal (time gain) aspects of the visual feedback so that participants performed the task under the following VF conditions: (1) high amplitude-fast speed; (2) low amplitude-slow speed; (3) high amplitude-slow speed. Changing the visual feedback from low amplitude-slow speed to high amplitude-fast speed increased force variability in older adults but decreased it in young adults (P < 0.01). Changing the visual feedback from low amplitude-slow speed to high amplitude-slow speed did not alter force variability in older adults (P > 0.2) but decreased it in young adults (P < 0.01). Changing the visual feedback from high amplitude-slow speed to high amplitude-fast speed increased force variability in older adults (P < 0.01) but did not alter it in young adults (P > 0.2). In summary, increased force variability in older adults with magnification of visual feedback was evident only when the speed of visual feedback increased. Thus, we conclude that in older adults, deficits in the rate of processing visual information, and not deficits in processing more visual information, impair force control.
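The two display manipulations amount to independent scalings of the feedback trace's vertical and horizontal axes, while the variability measure itself is computed from the actual force; a sketch with hypothetical samples and gains:

```python
import statistics

def cv(force_samples):
    """Coefficient of variation of force (SD / mean), a variability measure."""
    return statistics.pstdev(force_samples) / statistics.mean(force_samples)

def displayed_trace(force_samples, force_gain=1.0, time_gain=1.0, dt=0.01):
    """Scale the feedback trace: force_gain magnifies the vertical excursion,
    time_gain speeds up the horizontal sweep of the cursor."""
    return [(i * dt / time_gain, f * force_gain)
            for i, f in enumerate(force_samples)]

force = [2.0, 2.1, 1.9, 2.05, 1.95]  # N, near a constant low-force target
high_amp_fast = displayed_trace(force, force_gain=4.0, time_gain=2.0)
print(cv(force))  # variability is a property of the force, not of the display
```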
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which is highly independent, achieves high precision and is not susceptible to electrical interference, has attracted increasing attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which has further stimulated research into integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration program, significant progress has been made in the study of visual navigation. This paper reviews the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied in three areas. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, between instantly matched images and reference images, and between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching and image-frame methods, are mainly introduced. (3) Target tracking and positioning. From the acquired images, UAV position is calculated using the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; these are applied in rapid-response systems. (2) Distributed-network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which pair image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency and strong noise. Finally, the difficulties of computer-vision-based navigation in practical applications are briefly discussed: (1) because of the huge workload of image operations, real-time performance is poor; (2) because of large environmental effects, anti-interference ability is poor; and (3) because such systems work only in particular environments, their adaptability is poor.
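Of the estimation tools named above, Kalman filtering is the simplest to sketch; a minimal one-dimensional filter fusing noisy visual position fixes (all parameter values hypothetical) might look like:

```python
def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter fusing noisy visual position fixes."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q              # predict: uncertainty grows between frames
        k = p / (p + r)     # Kalman gain
        x += k * (z - x)    # correct toward the new measurement
        p *= 1.0 - k
        estimates.append(x)
    return estimates

# Noisy fixes around a true position of 10 m.
est = kalman_1d([10.4, 9.7, 10.2, 9.9, 10.1])
print(round(est[-1], 2))
```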
A Visual-Cue-Dependent Memory Circuit for Place Navigation.
Qin, Han; Fu, Ling; Hu, Bo; Liao, Xiang; Lu, Jian; He, Wenjing; Liang, Shanshan; Zhang, Kuan; Li, Ruijie; Yao, Jiwei; Yan, Junan; Chen, Hao; Jia, Hongbo; Zott, Benedikt; Konnerth, Arthur; Chen, Xiaowei
2018-06-05
The ability to remember and to navigate to safe places is necessary for survival. Place navigation is known to involve medial entorhinal cortex (MEC)-hippocampal connections. However, learning-dependent changes in neuronal activity in the distinct circuits remain unknown. Here, by using optic fiber photometry in freely behaving mice, we discovered the experience-dependent induction of a persistent-task-associated (PTA) activity. This PTA activity critically depends on learned visual cues and builds up selectively in the MEC layer II-dentate gyrus, but not in the MEC layer III-CA1 pathway, and its optogenetic suppression disrupts navigation to the target location. The findings suggest that the visual system, the MEC layer II, and the dentate gyrus are essential hubs of a memory circuit for visually guided navigation. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
The development of a white cane which navigates the visually impaired.
Shiizu, Yuriko; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2007-01-01
In this paper, we describe a navigation system we developed that supports the independent walking of the visually impaired in indoor spaces. The system is composed of colored navigation lines, RFID tags and an intelligent white cane. In our system, colored marking tapes, called navigation lines, are set along the walking route, and RFID tags are placed on these lines at each landmark point. The intelligent white cane can sense the color of a navigation line and receive tag information. Through vibration of the white cane, the system informs the visually impaired user that he/she is walking along the navigation line; at landmark points, it also announces area information by pre-recorded voice. Ten normal subjects who were blindfolded with an eye mask were tested with this system. All of them were able to walk along the navigation line, and the performance of the area information system was good. Therefore, we have concluded that our system will be extremely valuable in supporting the activities of the visually impaired.
Effect of visual feedback on brain activation during motor tasks: an FMRI study.
Noble, Jeremy W; Eng, Janice J; Boyd, Lara A
2013-07-01
This study examined the effect of visual feedback and force level on the neural mechanisms responsible for the performance of a motor task. We used a voxel-wise fMRI approach to determine the effect of visual feedback (with and without) during a grip force task at 35% and 70% of maximum voluntary contraction. Two areas (contralateral rostral premotor cortex and putamen) displayed an interaction between force and feedback conditions. When the main effect of feedback condition was analyzed, higher activation when visual feedback was available was found in 22 of the 24 active brain areas, while the two other regions (contralateral lingual gyrus and ipsilateral precuneus) showed greater levels of activity when no visual feedback was available. The results suggest that there is a potentially confounding influence of visual feedback on brain activation during a motor task, and for some regions, this is dependent on the level of force applied.
Self-motivated visual scanning predicts flexible navigation in a virtual environment.
Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C
2014-01-01
The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.
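The core analysis described above is a correlation between per-participant scanning amount and navigation accuracy. A minimal sketch of that computation, with fabricated example values (the study's data are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: scanning amount (e.g., fixations per trial) vs. accuracy.
scanning = [2.1, 3.5, 1.2, 4.0, 2.8]
accuracy = [0.55, 0.80, 0.40, 0.90, 0.65]
r = pearson_r(scanning, accuracy)  # positive r = more scanning, more accurate
```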
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, to limit the error accumulation inherent in relative navigation, the crater-based relative navigation method is integrated with a crater-based absolute navigation method that identifies craters using a georeferenced database for continuous estimation of absolute states. In addition, expressions of the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, which hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
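To give a feel for pose-from-matched-points estimation of the kind the abstract describes, here is a deliberately simplified planar sketch: recovering a 2D rotation and translation from crater edge points matched between two images via a least-squares (Kabsch-style) fit. This is not the paper's normalized method, which operates on full edge curves and 3D pose; it only illustrates the relative-pose step in miniature.

```python
import math

def relative_pose_2d(pts_a, pts_b):
    """Estimate theta and (tx, ty) such that R(theta) @ a + t ~= b,
    given matched point lists pts_a, pts_b of (x, y) tuples."""
    n = len(pts_a)
    cax = sum(p[0] for p in pts_a) / n; cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n; cby = sum(p[1] for p in pts_b) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        sxx += ax * bx; sxy += ax * by; syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)   # optimal planar rotation
    c, s = math.cos(theta), math.sin(theta)
    tx = cbx - (c * cax - s * cay)             # translation after rotation
    ty = cby - (s * cax + c * cay)
    return theta, (tx, ty)
```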
Behavioral and neural effects of congruency of visual feedback during short-term motor learning.
Ossmy, Ori; Mukamel, Roy
2018-05-15
Visual feedback can facilitate or interfere with movement execution. Here, we describe behavioral and neural mechanisms by which the congruency of visual feedback during physical practice of a motor skill modulates subsequent performance gains. 18 healthy subjects learned to execute rapid sequences of right hand finger movements during fMRI scans either with or without visual feedback. Feedback consisted of a real-time, movement-based display of virtual hands that was either congruent (right virtual hand movement), or incongruent (left virtual hand movement yoked to the executing right hand). At the group level, right hand performance gains following training with congruent visual feedback were significantly higher relative to training without visual feedback. Conversely, performance gains following training with incongruent visual feedback were significantly lower. Interestingly, across individual subjects these opposite effects correlated. Activation in the Supplementary Motor Area (SMA) during training corresponded to individual differences in subsequent performance gains. Furthermore, functional coupling of SMA with visual cortices predicted individual differences in behavior. Our results demonstrate that some individuals are more sensitive than others to congruency of visual feedback during short-term motor learning and that neural activation in SMA correlates with such inter-individual differences. Copyright © 2017 Elsevier Inc. All rights reserved.
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
33 CFR 175.130 - Visual distress signals accepted.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Visual distress signals accepted... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.130 Visual distress signals accepted. (a) Any of the following signals, when carried in the number required, can be used to meet the...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
33 CFR 175.110 - Visual distress signals required.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Visual distress signals required... (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.110 Visual distress signals... signals selected from the list in § 175.130 or the alternatives in § 175.135, in the number required, are...
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir
2014-01-01
Mobility training programs that teach the blind to navigate unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
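A single-point distance-to-audio cue of the EyeCane kind can be sketched as a simple mapping from sensed distance to beep cadence: nearer obstacles, faster beeps. The range limit and interval constants below are assumptions for illustration; the actual EyeCane mapping is not specified in this record.

```python
MAX_RANGE_M = 5.0       # assumed sensor range
MIN_INTERVAL_S = 0.05   # beep interval at point-blank range
MAX_INTERVAL_S = 1.0    # beep interval at (or beyond) maximum range

def beep_interval(distance_m: float) -> float:
    """Linearly map a sensed distance to the pause between auditory cues,
    clamping distances outside the sensor range."""
    d = min(max(distance_m, 0.0), MAX_RANGE_M)
    frac = d / MAX_RANGE_M
    return MIN_INTERVAL_S + frac * (MAX_INTERVAL_S - MIN_INTERVAL_S)
```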
van der Kuil, Milan N. A.; Visser-Meily, Johanna M. A.; Evers, Andrea W. M.; van der Ham, Ineke J. M.
2018-01-01
Acquired brain injury patients often report navigation impairments. A cognitive rehabilitation therapy has been designed in the form of a serious game. The aim of the serious game is to aid patients in the development of compensatory navigation strategies by providing exercises in 3D virtual environments on their home computers. The objective of this study was to assess the usability of three critical gaming attributes: movement control in 3D virtual environments, instruction modality and feedback timing. Thirty acquired brain injury patients performed three tasks in which objective measures of usability were obtained. Mouse controlled movement was compared to keyboard controlled movement in a navigation task. Text-based instructions were compared to video-based instructions in a knowledge acquisition task. The effect of feedback timing on performance and motivation was examined in a navigation training game. Subjective usability ratings of all design options were assessed using questionnaires. Results showed that mouse controlled interaction in 3D environments is more effective than keyboard controlled interaction. Patients clearly preferred video-based instructions over text-based instructions, even though video-based instructions were not more effective in context of knowledge acquisition and comprehension. No effect of feedback timing was found on performance and motivation in games designed to train navigation abilities. Overall appreciation of the serious game was positive. The results provide valuable insights in the design choices that facilitate the transfer of skills from serious games to real-life situations. PMID:29922196
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims to construct an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force feedback steering control and a wall of six monitors; it provides manual operation, similar to driving a car, for navigating a rescue robot. The latter consists of a mouse and a camera view displayed in a monitor; it provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Results of experiments show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface gives high navigation speed in open areas, without restriction by the terrain and surface conditions of a disaster site. The mouse-screen interface is good at exact navigation in complex structures, while imposing little tension on operators. The two interfaces are designed to switch into each other at any time to provide a combined, efficient navigation method.
Seeing the hand while reaching speeds up on-line responses to a sudden change in target position
Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre
2009-01-01
Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067
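The latency measure described above (time from perturbation to the first EMG response) is commonly estimated with a baseline-plus-threshold rule. The sketch below uses a generic mean-plus-k-standard-deviations onset criterion on a rectified EMG trace; it is a standard heuristic, not the study's exact analysis pipeline, and the sampling rate and threshold factor are assumptions.

```python
import statistics

def onset_latency(emg, fs_hz, perturb_idx, baseline_len, k=3.0):
    """Return the onset latency in seconds after the perturbation sample,
    or None if the signal never crosses the baseline-derived threshold."""
    baseline = emg[:baseline_len]
    thresh = statistics.mean(baseline) + k * statistics.pstdev(baseline)
    for i in range(perturb_idx, len(emg)):
        if abs(emg[i]) > thresh:
            return (i - perturb_idx) / fs_hz
    return None
```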
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* imagery to build a feature database. The same algorithm then detects features in an on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery; another provider can be used.
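The database-correlation step the abstract describes boils down to matching on-board image descriptors against a pre-built feature database. A common way to do this is nearest-neighbour matching with a ratio test; the sketch below illustrates that idea on plain numeric tuples. A real system would use proper image descriptors (e.g., SIFT/ORB), and nothing here is taken from the paper's implementation.

```python
import math

def match_features(query_descs, db_descs, ratio=0.8):
    """Return (query_idx, db_idx) pairs whose nearest database neighbour
    passes the distance-ratio test against the second-nearest one."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((math.dist(q, d), di) for di, d in enumerate(db_descs))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # unambiguous match only
            matches.append((qi, best[1]))
    return matches
```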
The effect of multimodal and enriched feedback on SMR-BCI performance.
Sollfrank, T; Ramsay, A; Perdikis, S; Williamson, J; Murray-Smith, R; Leeb, R; Millán, J D R; Kübler, A
2016-01-01
This study investigated the effect of multimodal (visual and auditory) continuous feedback with information about the uncertainty of the input signal on motor imagery based BCI performance. A liquid floating through a visualization of a funnel (funnel feedback) provided enriched visual or enriched multimodal feedback. In a between-subject design, 30 healthy SMR-BCI-naive participants were provided with either conventional bar feedback (CB), visual funnel feedback (UF), or multimodal (visual and auditory) funnel feedback (MF). Subjects were required to imagine left and right hand movement and were trained to control the SMR-based BCI for five sessions on separate days. Feedback accuracy varied considerably between participants. The MF feedback led to a significantly better performance in session 1 as compared with the CB feedback and could significantly enhance motivation and minimize frustration in BCI use across the five training sessions. The present study demonstrates that the BCI funnel feedback allows participants to modulate sensorimotor EEG rhythms. Participants were able to control the BCI with the funnel feedback with better performance during the initial session and less frustration compared with the CB feedback. The multimodal funnel feedback provides an alternative to the conventional cursor-bar feedback for training subjects to modulate their sensorimotor rhythms. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien
2018-01-11
Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, though theoretically subject to interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent in high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
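The moderation analysis described above corresponds to a logistic model with an interaction term: detection probability as a function of working memory (WM), visual clutter, and their product. The sketch below shows the model's form only; all coefficients are invented for illustration (with a positive interaction so the WM benefit grows with clutter, as the abstract reports), and the paper's fitted values are not reproduced.

```python
import math

def detect_prob(wm, clutter, b0=-1.0, b_wm=0.2, b_cl=-0.8, b_int=0.3):
    """Moderated logistic model: P(detect | WM, clutter).
    The b_int * wm * clutter term captures moderation by clutter."""
    z = b0 + b_wm * wm + b_cl * clutter + b_int * wm * clutter
    return 1.0 / (1.0 + math.exp(-z))
```

With these made-up coefficients, the marginal effect of WM on the linear predictor is b_wm + b_int * clutter, i.e., strengthened working memory matters more in high-clutter scenes.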
Maestas, Gabrielle; Hu, Jiyao; Trevino, Jessica; Chunduru, Pranathi; Kim, Seung-Jae; Lee, Hyunglae
2018-01-01
The use of visual feedback in gait rehabilitation has been suggested to promote recovery of locomotor function by incorporating interactive visual components. Our prior work demonstrated that visual feedback distortion of changes in step length symmetry entails an implicit or unconscious adaptive process in the subjects' spatial gait patterns. We investigated whether the effect of the implicit visual feedback distortion would persist at three different walking speeds (slow, self-preferred and fast) and how different walking speeds would affect the amount of adaptation. In the visual feedback distortion paradigm, visual vertical bars portraying subjects' step lengths were distorted so that subjects perceived their step lengths to be asymmetric during testing. Measuring the adjustments in step length during the experiment showed that healthy subjects made spontaneous modulations away from actual symmetry in response to the implicit visual distortion, regardless of walking speed. In all walking scenarios, the effects of implicit distortion became more significant at higher distortion levels. In addition, the amount of adaptation induced by the visual distortion was significantly greater during walking at the preferred or slow speed than at the fast speed. These findings indicate that although a link exists between supraspinal function through the visual system and human locomotion, sensory feedback control for locomotion is speed-dependent. Ultimately, our results support the concept that implicit visual feedback can act as a dominant form of feedback in gait modulation, regardless of speed. PMID:29632481
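The paradigm above rests on a step-length symmetry measure and a distortion of its visual display. The sketch below uses the common (L - R) / (L + R) symmetry-index convention and scales one displayed bar by a distortion factor; both the index form and the scaling scheme are assumptions for illustration, not the study's exact formulas.

```python
def symmetry_index(left_step, right_step):
    """Common step-length symmetry index: 0 = symmetric gait."""
    return (left_step - right_step) / (left_step + right_step)

def displayed_steps(left_step, right_step, distortion=1.0):
    """Visual feedback bars with the left bar scaled by a distortion factor,
    so truly symmetric gait is *displayed* as asymmetric when distortion != 1."""
    return left_step * distortion, right_step
```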
Anson, Eric; Rosenberg, Russell; Agada, Peter; Kiemel, Tim; Jeka, John
2013-11-26
Most current applications of visual feedback to improve postural control are limited to a fixed base of support and produce mixed results regarding improved postural control and transfer to functional tasks. Currently there are few options available to provide visual feedback regarding trunk motion while walking. We have developed a low cost platform to provide visual feedback of trunk motion during walking. Here we investigated whether augmented visual position feedback would reduce trunk movement variability in both young and older healthy adults. The subjects who participated were 10 young and 10 older adults. Subjects walked on a treadmill under conditions of visual position feedback and no feedback. The visual feedback consisted of anterior-posterior (AP) and medial-lateral (ML) position of the subject's trunk during treadmill walking. Fourier transforms of the AP and ML trunk kinematics were used to calculate power spectral densities which were integrated as frequency bins "below the gait cycle" and "gait cycle and above" for analysis purposes. Visual feedback reduced movement power at very low frequencies for lumbar and neck translation but not trunk angle in both age groups. At very low frequencies of body movement, older adults had equivalent levels of movement variability with feedback as young adults without feedback. Lower variability was specific to translational (not angular) trunk movement. Visual feedback did not affect any of the measured lower extremity gait pattern characteristics of either group, suggesting that changes were not invoked by a different gait pattern. Reduced translational variability while walking on the treadmill reflects more precise control maintaining a central position on the treadmill. Such feedback may provide an important technique to augment rehabilitation to minimize body translation while walking. 
Individuals with poor balance during walking may benefit from this type of training to enhance path consistency during over-ground locomotion.
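The spectral analysis described above (Fourier transforms of trunk kinematics, with power integrated into bins below vs. at-and-above the gait-cycle frequency) can be sketched with a naive DFT. This is a clarity-first illustration; the study's exact windowing, normalization, and bin edges are not specified here and the split frequency below is an assumption.

```python
import cmath

def band_power(signal, fs_hz, split_hz):
    """Integrate DFT power into two bins: below split_hz, and at/above it.
    Uses a naive O(n^2) DFT over positive frequencies, skipping DC."""
    n = len(signal)
    low = high = 0.0
    for k in range(1, n // 2):
        f = k * fs_hz / n
        xk = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                 for t in range(n))
        p = abs(xk) ** 2 / n
        if f < split_hz:
            low += p
        else:
            high += p
    return low, high
```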
Computerized visual feedback: an adjunct to robotic-assisted gait training.
Banz, Raphael; Bolliger, Marc; Colombo, Gery; Dietz, Volker; Lünenburger, Lars
2008-10-01
Robotic devices for walking rehabilitation allow new possibilities for providing performance-related information to patients during gait training. Based on motor learning principles, augmented feedback during robotic-assisted gait training might improve the rehabilitation process used to regain walking function. This report presents a method to provide visual feedback implemented in a driven gait orthosis (DGO). The purpose of the study was to compare the immediate effect on motor output in subjects during robotic-assisted gait training when they used computerized visual feedback and when they followed verbal instructions of a physical therapist. Twelve people with neurological gait disorders due to incomplete spinal cord injury participated. Subjects were instructed to walk within the DGO in 2 different conditions. They were asked to increase their motor output by following the instructions of a therapist and by observing visual feedback. In addition, the subjects' opinions about using visual feedback were investigated by a questionnaire. Computerized visual feedback and verbal instructions by the therapist were observed to result in a similar change in motor output in subjects when walking within the DGO. Subjects reported that they were more motivated and concentrated on their movements when using computerized visual feedback compared with when no form of feedback was provided. Computerized visual feedback is a valuable adjunct to robotic-assisted gait training. It represents a relevant tool to increase patients' motor output, involvement, and motivation during gait training, similar to verbal instructions by a therapist.
Applicability of Deep-Learning Technology for Relative Object-Based Navigation
2017-09-01
One of the possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static
Insect navigation: do ants live in the now?
Graham, Paul; Mangan, Michael
2015-03-01
Visual navigation is a critical behaviour for many animals, and it has been particularly well studied in ants. Decades of ant navigation research have uncovered many ways in which efficient navigation can be implemented in small brains. For example, ants show us how visual information can drive navigation via procedural rather than map-like instructions. Two recent behavioural observations highlight interesting adaptive ways in which ants implement visual guidance. Firstly, it has been shown that the systematic nest searches of ants can be biased by recent experience of familiar scenes. Secondly, ants have been observed to show temporary periods of confusion when asked to repeat a route segment, even if that route segment is very familiar. Taken together, these results indicate that the navigational decisions of ants take into account their recent experiences as well as the currently perceived environment. © 2015. Published by The Company of Biologists Ltd.
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
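The motor imagery (MI)-related signal magnitude that such neurofeedback training increases is typically quantified as event-related desynchronization (ERD): the percentage change in sensorimotor band power during imagery relative to rest. The formula below is the standard convention; the band, rest window, and example values are assumptions, not taken from this study.

```python
def erd_percent(power_rest, power_imagery):
    """Standard ERD/ERS measure: percentage band-power change relative
    to a rest baseline. Negative values = desynchronization during MI."""
    return 100.0 * (power_imagery - power_rest) / power_rest
```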
Horton, Emily L; Renganathan, Ramkesh; Toth, Bryan N; Cohen, Alexa J; Bajcsy, Andrea V; Bateman, Amelia; Jennings, Mathew C; Khattar, Anish; Kuo, Ryan S; Lee, Felix A; Lim, Meilin K; Migasiuk, Laura W; Zhang, Amy; Zhao, Oliver K; Oliveira, Marcio A
2017-01-01
To lay the groundwork for devising, improving, and implementing new technologies to meet the needs of individuals with visual impairments, a systematic literature review was conducted to: a) describe hardware platforms used in assistive devices, b) identify their various applications, and c) summarize practices in user testing conducted with these devices. A search in relevant EBSCO databases for articles published between 1980 and 2014 with terminology related to visual impairment, technology, and tactile sensory adaptation yielded 62 articles that met the inclusion criteria for final review. It was found that while earlier hardware development focused on pin matrices, the emphasis then shifted toward force feedback haptics and accessible touch screens. The inclusion of interactive and multimodal features has become increasingly prevalent. The quantity and consistency of research on navigation, education, and computer accessibility suggest that these are pertinent areas of need for the visually impaired community. Methodologies for usability testing ranged from case studies to larger cross-sectional studies. Many studies used blindfolded sighted users to draw conclusions about design principles and usability. Altogether, the findings presented in this review provide insight on effective design strategies and user testing methodologies for future research on assistive technology for individuals with visual impairments.
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and is equipped with two arms, two wheels and two claws, enabling it to observe, grasp, walk, roll, turn, rise, and descend. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networking in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control, enabling it to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.
Local navigation and fuzzy control realization for autonomous guided vehicle
NASA Astrophysics Data System (ADS)
El-Konyaly, El-Sayed H.; Saraya, Sabry F.; Shehata, Raef S.
1996-10-01
This paper addresses the problem of local navigation for an autonomous guided vehicle (AGV) in a structured environment that contains static and dynamic obstacles. Information about the environment is obtained via a CCD camera. The problem is formulated as a dynamic feedback control problem in which speed and steering decisions are made on the fly while the AGV is moving. A decision element (DE) that uses local information is proposed. The DE guides the vehicle in the environment by producing appropriate navigation decisions. Dynamic models of a three-wheeled vehicle for driving and steering mechanisms are derived. The interaction between them is performed via the local feedback DE. A controller, based on fuzzy logic, is designed to drive the vehicle safely in an intelligent and human-like manner. The effectiveness of the navigation and control strategies in driving the AGV is illustrated and evaluated.
Saris-Baglama, Renee N.; Smith, Kevin J.; DeRosa, Michael A.; Paulsen, Christine A.; Hogue, Sarah J.
2011-01-01
Abstract Objective The aim of this study was to evaluate usability of a prototype tablet PC-administered computerized adaptive test (CAT) of headache impact and patient feedback report, referred to as HEADACHE-CAT. Materials and Methods Heuristic evaluation specialists (n = 2) formed a consensus opinion on the application's strengths and areas for improvement based on general usability principles and human factors research. Usability testing involved structured interviews with headache sufferers (n = 9) to assess how they interacted with and navigated through the application, and to gather input on the survey and report interface, content, visual design, navigation, instructions, and user preferences. Results Specialists identified the need for improved instructions and text formatting, increased font size, page setup that avoids scrolling, and simplified presentation of feedback reports. Participants found the tool useful, and indicated a willingness to complete it again and recommend it to their healthcare provider. However, some had difficulty using the onscreen keyboard and autoadvance option; understanding the difference between generic and headache-specific questions; and interpreting score reports. Conclusions Heuristic evaluation and user testing can help identify usability problems in the early stages of application development, and improve the construct validity of electronic assessments such as the HEADACHE-CAT. An improved computerized HEADACHE-CAT measure can offer headache sufferers an efficient tool to increase patient self-awareness, monitor headaches over time, aid patient–provider communications, and improve quality of life. PMID:21214341
A navigation system for the visually impaired: an intelligent white cane.
Fukasawa, A Jin; Magatani, Kazusihge
2012-01-01
In this paper, we describe a navigation system that supports independent walking by the visually impaired in indoor spaces. Our instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor: a color sensor installed on the tip of the white cane senses the color of the navigation line, and the system uses vibration to inform the user that he/she is walking along the line. This color recognition system is controlled by a one-chip microprocessor. RFID tags and a receiver for these tags are used in the map information system. The RFID tags are set along the colored navigation line, and an antenna for the tags and a tag receiver are also installed on the white cane. The receiver receives area information as a tag number and announces map information to the user via pre-recorded MP3 voice messages. We have also developed a direction-identification technique that detects the user's walking direction using a triaxial acceleration sensor. Three normal subjects who were blindfolded with an eye mask tested our navigation system, and all of them were able to walk along the navigation line perfectly. We think that the performance of the system is good; our system should therefore be extremely valuable in supporting the activities of the visually impaired.
Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A
2014-01-01
This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation, two primary yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with our formulation of this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on nonverbal communication offers a theoretically grounded and acceptable way to give clinicians awareness of their nonverbal communication style.
We provide implications for the design of such visual feedback that encourages empathic patient-centered communication and include considerations of metaphor, color, size, position, and timing of feedback. Ambient visual feedback from SSP holds promise as an acceptable means for facilitating empathic patient-centered nonverbal communication.
ERIC Educational Resources Information Center
Snyder, Gregory J.; Hough, Monica Strauss; Blanchet, Paul; Ivy, Lennette J.; Waddell, Dwight
2009-01-01
Purpose: Relatively recent research documents that visual choral speech, which represents an externally generated form of synchronous visual speech feedback, significantly enhanced fluency in those who stutter. As a consequence, it was hypothesized that self-generated synchronous and asynchronous visual speech feedback would likewise enhance…
Role of Visual Feedback Treatment for Defective /s/ Sounds in Patients with Cleft Palate.
ERIC Educational Resources Information Center
Michi, Ken-ichi; And Others
1993-01-01
Six patients with cleft palate were provided treatment using either visual feedback for tongue placement and frication or no visual feedback. Results indicated the feedback was especially useful in the treatment of defective /s/ sounds in the patients who exhibited abnormal posterior tongue posturing during dental or alveolar sounds. (Author/DB)
Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.
Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta
2015-05-01
Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. The two conditions providing feedback by changing the target color showed the highest number of omissions, whereas erasing the targets almost completely eliminated omissions. The highest rate of perseverations was observed in the no-feedback condition; the implementation of distracters led to a moderate number of perseverations, and visual feedback without distracters and vanishing targets abolished perseverations nearly completely. Visual feedback and the presence of distracters thus aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor in visual neglect. The improvement of cancellation behavior with vanishing targets could have therapeutic implications. (c) 2015 APA, all rights reserved.
Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report
2011-01-01
Introduction Phantom limb sensation and phantom limb pain are very common after amputation. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion This case may suggest that auditory feedback can enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334
[Impairment of safety in navigation caused by alcohol: impact on visual function].
Grütters, G; Reichelt, J A; Ritz-Timme, S; Thome, M; Kaatsch, H J
2003-05-01
To date, no legally binding blood alcohol concentration standards that prove an impairment of navigability exist in Germany. The aim of our interdisciplinary project was to obtain data in order to identify critical blood alcohol limits; in this context the visual system seems to be of decisive importance. Twenty-one professional skippers faced realistic navigational demands in a sea traffic simulator, once sober and once under the influence of alcohol. The following parameters were considered: visual acuity, stereopsis, color vision, and accommodation. Under the influence of alcohol (average blood alcohol concentration: 1.08 per thousand), each skipper considered himself completely capable of navigating. While the simulations were running, all of the skippers made nautical mistakes or underestimated dangerous situations. Severe impairments of visual acuity or binocular function were not observed. Accommodation decreased by an average of 18% (p=0.0001). In the color vision test, skippers made more mistakes (p=0.017) and needed more time (p=0.004). Changes in visual function, as well as vegetative and psychological reactions, could be the cause of these mistakes; alcohol should therefore be regarded as a severe risk factor for safety in sea navigation.
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Developmental remodeling of corticocortical feedback circuits in ferret visual cortex
Khalil, Reem; Levitt, Jonathan B.
2014-01-01
Visual cortical areas in the mammalian brain are linked through a system of interareal feedforward and feedback connections, which presumably underlie different visual functions. We characterized the refinement of feedback projections to primary visual cortex (V1) from multiple sources in juvenile ferrets ranging in age from four to ten weeks postnatal. We studied whether the refinement of different aspects of feedback circuitry from multiple visual cortical areas proceeds at a similar rate in all areas. We injected the neuronal tracer cholera toxin B (CTb) into V1, and mapped the areal and laminar distribution of retrogradely labeled cells in extrastriate cortex. Around the time of eye opening at four weeks postnatal, the retinotopic arrangement of feedback appears essentially adultlike; however, Suprasylvian cortex supplies the greatest proportion of feedback, whereas area 18 supplies the greatest proportion in the adult. The density of feedback cells and the ratio of supragranular/infragranular feedback contribution declined in this period at a similar rate in all cortical areas. We also find significant feedback to V1 from layer IV of all extrastriate areas. The regularity of cell spacing, the proportion of feedback arising from layer IV, and the tangential extent of feedback in each area all remained essentially unchanged during this period, except for the infragranular feedback source in area 18 which expanded. Thus, while much of the basic pattern of cortical feedback to V1 is present before eye opening, there is major synchronous reorganization after eye opening, suggesting a crucial role for visual experience in this remodeling process. PMID:24665018
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
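The "simple view matching strategies for orientation" examined here are commonly modelled as a rotational image difference function: rotate the current low-resolution panoramic view through each azimuthal step, compare every rotation against the stored snapshot, and head in the direction of best match. A minimal sketch under that assumption (array shapes and function names are illustrative; the column count stands in for the eye's azimuthal resolution):

```python
import numpy as np

def rotational_image_difference(stored_view, current_view):
    """RMS pixel difference between the stored snapshot and the current
    panoramic view rotated through every azimuthal step (column shift)."""
    n_cols = stored_view.shape[1]
    diffs = np.empty(n_cols)
    for shift in range(n_cols):
        rotated = np.roll(current_view, shift, axis=1)
        diffs[shift] = np.sqrt(np.mean((rotated - stored_view) ** 2))
    return diffs

def best_heading_deg(stored_view, current_view):
    """Heading estimate: the rotation minimising the image difference."""
    diffs = rotational_image_difference(stored_view, current_view)
    return np.argmin(diffs) * 360.0 / stored_view.shape[1]
```

Lowering the column count in such a model coarsens the difference function, which is one way to explore the specificity/generalisation trade-off the abstract describes.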
Variable force and visual feedback effects on teleoperator man/machine performance
NASA Technical Reports Server (NTRS)
Massimino, Michael J.; Sheridan, Thomas B.
1989-01-01
An experimental study was conducted to determine the effects of various forms of visual and force feedback on human performance for several telemanipulation tasks. Experiments were conducted with varying frame rates and subtended visual angles, with and without force feedback.
Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model
Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2017-01-01
Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996
How to find home backwards? Navigation during rearward homing of Cataglyphis fortis desert ants.
Pfeffer, Sarah E; Wittlinger, Matthias
2016-07-15
Cataglyphis ants are renowned for their impressive navigation skills, which have been studied in numerous experiments during forward locomotion. However, the ants' navigational performance during backward homing when dragging large food loads has not been investigated until now. During backward locomotion, the odometer has to deal with unsteady motion and irregularities in inter-leg coordination. The legs' sensory feedback during backward walking is not just a simple reversal of the forward stepping movements: compared with forward homing, ants face in the opposite direction during backward dragging. Hence, the compass system has to cope with a flipped celestial view (in terms of the polarization pattern and the position of the sun) and an inverted retinotopic image of the visual panorama and landmark environment. The same is true for wind and olfactory cues. In this study we analyze for the first time backward-homing ants and evaluate their navigational performance in channel and open field experiments. Backward-homing Cataglyphis fortis desert ants show remarkable similarities in homing performance compared with forward-walking ants. Despite the numerous challenges emerging for the navigational system during backward walking, we show that ants perform quite well in our experiments. Direction and distance gauging was comparable to that of the forward-walking control groups. Interestingly, we found that backward-homing ants often put down the food item and performed foodless search loops around the food item they had left behind. These search loops were mainly centred around the drop-off position (and not around the nest position), and increased in length the closer the ants came to their fictive nest site. © 2016. Published by The Company of Biologists Ltd.
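The odometer and compass estimates discussed above feed a path integrator that maintains a home vector. As a hedged illustration, an idealised textbook integrator (not the ants' actual mechanism, and assuming noise-free readings) can be sketched as:

```python
import math

def home_vector(steps):
    """Integrate (heading_deg, distance) step estimates, as an idealised
    path integrator combining compass and odometer readings, and return
    the direction (deg) and distance of the vector pointing back home."""
    x = y = 0.0
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    # The home vector is the negation of the accumulated displacement
    home_dir = math.degrees(math.atan2(-y, -x)) % 360.0
    return home_dir, math.hypot(x, y)
```

Backward dragging perturbs both inputs at once: step-length estimates become irregular (odometer) and the celestial view is flipped (compass), which is what makes the ants' accurate homing notable.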
Rougier, Patrice R; Boudrahem, Samir
2017-09-01
The technique of additional visual feedback has been shown to significantly decrease the center of pressure (CP) displacements of a standing subject. Body-weight asymmetry is known to increase postural instability due to difficulties in coordinating the reaction forces exerted under each foot, and is often a cardinal feature of various neurological and traumatic diseases. To examine the possible interactions between additional visual feedback and body-weight asymmetry effects, healthy adults were recruited into a protocol with and without additional visual feedback, at different levels of body-weight asymmetry. CP displacements under each foot were recorded and used to compute the resultant CP displacements (CPRes) and to estimate the vertically projected center of gravity (CGv) and the CPRes-CGv displacements. Overall, six conditions were randomly proposed, combining two factors: asymmetry, with three body-weight percentage distributions (50/50, 35/65 and 20/80; left/right leg), and feedback (with or without additional visual feedback). The additional visual feedback technique principally reduces CGv displacements, whereas asymmetry increases CPRes-CGv displacements along the mediolateral axis. Some effects on plantar CP displacements were also observed, but only under the unloaded foot. Interestingly, no interaction between additional visual feedback and body-weight asymmetry was reported. These results suggest that the various postural effects that ensue from manipulating additional visual feedback parameters, shown previously in healthy subjects in various studies, could also apply independently of the level of asymmetry. Visual feedback effects could thus be observed in patients presenting weight-bearing asymmetries. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
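The resultant CP in two-platform studies like this is conventionally computed by weighting each foot's CP by its share of the vertical ground reaction force. A minimal sketch of that standard formula (variable names are illustrative, positions in metres along one axis):

```python
def resultant_cp(cp_left, cp_right, rv_left, rv_right):
    """Resultant centre of pressure from the CP under each foot,
    weighted by the fraction of vertical ground reaction force
    (rv_left, rv_right) that each foot carries."""
    total = rv_left + rv_right
    return cp_left * rv_left / total + cp_right * rv_right / total
```

The weighting makes clear why imposed asymmetry matters: under a 20/80 load split, the resultant CP is dominated by the loaded foot, so coordinating the two under-foot CPs becomes harder along the mediolateral axis.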
The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.
Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni
2017-09-01
The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in a lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < 0.05). To achieve better control of interaction forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.
Baumann, Oliver; Skilleter, Ashley J.; Mattingley, Jason B.
2011-01-01
The goal of the present study was to examine the extent to which working memory supports the maintenance of object locations during active spatial navigation. Participants were required to navigate a virtual environment and to encode the location of a target object. In the subsequent maintenance period they performed one of three secondary tasks that were designed to selectively load visual, verbal or spatial working memory subsystems. Thereafter participants re-entered the environment and navigated back to the remembered location of the target. We found that while navigation performance in participants with high navigational ability was impaired only by the spatial secondary task, navigation performance in participants with poor navigational ability was impaired equally by spatial and verbal secondary tasks. The visual secondary task had no effect on navigation performance. Our results extend current knowledge by showing that the differential engagement of working memory subsystems is determined by navigational ability. PMID:21629686
Computer-assisted navigation in orthopedic surgery.
Mavrogenis, Andreas F; Savvidou, Olga D; Mimidis, George; Papanastasiou, John; Koulalis, Dimitrios; Demertzis, Nikolaos; Papagelopoulos, Panayiotis J
2013-08-01
Computer-assisted navigation has a role in some orthopedic procedures. It allows the surgeons to obtain real-time feedback and offers the potential to decrease intra-operative errors and optimize the surgical result. Computer-assisted navigation systems can be active or passive. Active navigation systems can either perform surgical tasks or prohibit the surgeon from moving past a predefined zone. Passive navigation systems provide intraoperative information, which is displayed on a monitor, but the surgeon is free to make any decisions he or she deems necessary. This article reviews the available types of computer-assisted navigation, summarizes the clinical applications and reviews the results of related series using navigation, and informs surgeons of the disadvantages and pitfalls of computer-assisted navigation in orthopedic surgery. Copyright 2013, SLACK Incorporated.
A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors.
Roubieu, Frédéric L; Serres, Julien R; Colonnier, Fabien; Franceschini, Nicolas; Viollet, Stéphane; Ruffier, Franck
2014-09-01
Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s⁻¹ with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s⁻¹). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two further LMSs oriented at ±45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as flight in the presence of wind.
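The dual-loop OF regulation scheme can be illustrated with a minimal discrete-time sketch. The set-point for the summed OF, the control gains, and the starting state below are illustrative assumptions; only the 125° s⁻¹ unilateral set-point comes from the abstract, and the real autopilot's structure is certainly more elaborate.

```python
import math

def lateral_of(speed, distance):
    """Translational optic flow (deg/s) seen on one side:
    forward speed divided by lateral distance to that wall."""
    return math.degrees(speed / distance)

def autopilot_step(speed, y, corridor_width,
                   of_setpoint_sum=230.0,    # assumed set-point for the speed loop
                   of_setpoint_side=125.0,   # unilateral OF set-point (deg/s)
                   k_speed=0.001, k_side=0.0005):
    """One step of the two intertwined loops, with no speed or
    distance measurement: only the two lateral OFs are used."""
    of_l = lateral_of(speed, y)                       # left-wall OF
    of_r = lateral_of(speed, corridor_width - y)      # right-wall OF
    # Speed loop: hold the SUM of the two lateral OFs at its set-point.
    speed += k_speed * (of_setpoint_sum - (of_l + of_r))
    # Side loop: hold the LARGER (nearer-wall) OF at its set-point;
    # too much flow on one side steers the craft away from that wall.
    if of_l >= of_r:
        y += k_side * (of_l - of_setpoint_side)
    else:
        y -= k_side * (of_r - of_setpoint_side)
    return speed, y
```

Iterating this from an off-centre start drives the craft to a steady forward speed and wall clearance without ever knowing either quantity explicitly, which is the core claim of the OF-regulation hypothesis.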
The Influence of Restricted Visual Feedback on Dribbling Performance in Youth Soccer Players.
Fransen, Job; Lovell, Thomas W J; Bennett, Kyle J M; Deprez, Dieter; Deconinck, Frederik J A; Lenoir, Matthieu; Coutts, Aaron J
2017-04-01
The aim of the current study was to examine the influence of restricted visual feedback using stroboscopic eyewear on the dribbling performance of youth soccer players. Three dribble test conditions were used in a within-subjects design to measure the effect of restricted visual feedback on soccer dribbling performance in 189 youth soccer players (age: 10-18 y) classified as fast, average or slow dribblers. The results showed that limiting visual feedback increased dribble test times across all abilities. Furthermore, the largest performance decrement between stroboscopic and full vision conditions was in fast dribblers, showing that fast dribblers were most affected by reduced visual information. This may be due to a greater dependency on visual feedback at increased speeds, which may limit the ability to maintain continuous control of the ball. These findings may have important implications for the development of soccer dribbling ability.
2018-02-12
usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the ... assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference ... in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4
Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.
Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-01-01
This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring activation of the intraparietal sulcus in the user's brain during control of a virtual manipulator. The objective is to examine changes in intraparietal sulcus activity when the user's visual or somatic feedback is passed through or interrupted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are present, but deactivates when either is interrupted. The brain activity of three subjects was measured by functional near-infrared spectroscopic topography brain imaging while they used a hand controller to move the virtual arm of a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback; (ii) the user moved with eyes closed, with only somatic feedback; (iii) the user only gazed at the screen, with only visual feedback. Brain activity was significantly greater when controlling the virtual arm naturally (p < 0.05) than when moving with eyes closed or only gazing, across all participants. In conclusion, the brain can activate according to the agreement of visual and somatic sensory feedback.
Computation and visualization of uncertainty in surgical navigation.
Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A
2014-09-01
Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data are presented as though they were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons examined the effect of uncertainty visualization on surgical performance during pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert the screw, degree of pedicle breach, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
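The abstract's central technical point, that pose uncertainty at the tracked instrument body propagates (and is amplified by the lever arm) to the tip, can be sketched with a first-order 2-D model. The noise magnitudes and tool length below are illustrative, not values from the paper, and the paper's actual propagation is a full 3-D covariance treatment.

```python
import math
import random

def tip_std_first_order(sigma_pos, sigma_theta, arm_length):
    """First-order lateral tip uncertainty: tracked-body position error
    and lever-arm-amplified angular error, combined in quadrature."""
    return math.sqrt(sigma_pos ** 2 + (arm_length * sigma_theta) ** 2)

def tip_std_monte_carlo(sigma_pos, sigma_theta, arm_length, n=200_000, seed=0):
    """Monte-Carlo check: sample body pose errors, propagate each to the
    tip, and measure the spread of the resulting lateral tip error."""
    rng = random.Random(seed)
    lateral = []
    for _ in range(n):
        dy = rng.gauss(0.0, sigma_pos)       # body position error (mm)
        dth = rng.gauss(0.0, sigma_theta)    # body angular error (rad)
        lateral.append(dy + arm_length * math.sin(dth))
    mean = sum(lateral) / n
    return math.sqrt(sum((v - mean) ** 2 for v in lateral) / n)
```

For example, a 0.5 mm positional and 0.01 rad angular tracking error on a 200 mm tool already yields roughly 2 mm of lateral tip uncertainty, which is why visualizing it matters for pedicle screw insertion.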
Lifting business process diagrams to 2.5 dimensions
NASA Astrophysics Data System (ADS)
Effinger, Philip; Spielmann, Johannes
2010-01-01
In this work, we describe our visualization approach for business processes using 2.5-dimensional (2.5D) techniques. The idea of 2.5D is to add the concept of layering to a two-dimensional (2D) visualization; the layers are arranged in a three-dimensional display space. For modeling the business processes, we use the Business Process Modeling Notation (BPMN). The benefit of connecting BPMN with a 2.5D visualization is not only a more abstract view of the business process models but also the development of layering criteria that ultimately increase the readability of the BPMN model compared with 2D. We present a 2.5D Navigator for BPMN models that offers different perspectives for visualization, including BPMN-specific perspectives that we develop. The 2.5D Navigator combines the 2.5D approach with perspectives and allows free navigation in the three-dimensional display space. We also demonstrate our tool and the libraries used to implement the visualizations. The underlying general framework for 2.5D visualizations is explored and presented in such a way that it can easily be reused for different applications. Finally, an evaluation of our navigation tool demonstrates that we can achieve satisfying and aesthetic displays of diagrams representing BPMN models in 2.5D visualizations.
The effects of link format and screen location on visual search of web pages.
Ling, Jonathan; Van Schaik, Paul
2004-06-22
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.
Image processing and applications based on visualizing navigation service
NASA Astrophysics Data System (ADS)
Hwang, Chyi-Wen
2015-07-01
When facing the "overabundance" of semantic web information, the researcher proposes in this paper a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables users to retrieve and construct their personal knowledge. Furthermore, based on market segmentation theory from the marketing model, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Irrespective of whether the simple or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognitive effect than the "non CM + SS + KOD navigation system"; for users with no web design experience, however, the navigation system does not have an obvious cognitive effect. (2) Classification is essential in semantic web communication design: different groups of users have diverse preference needs and different cognitive styles in the CM + SS + KOD navigation system.
Teulings, H; Contreras-Vidal, J; Stelmach, G; Adler, C
2002-01-01
Objective: The ability to use visual feedback to control handwriting size was compared in patients with Parkinson's disease (PD), elderly people, and young adults to better understand factors playing a part in parkinsonian micrographia. Methods: The participants wrote sequences of eight cursive l loops with visual target sizes of 0.5 and 2 cm on a flat panel display digitiser which both recorded and displayed the pen movements. In the pre-exposure and postexposure conditions, the display digitiser showed the actual pen trace in real time and real size. In the distortion exposure conditions, the gain of the vertical dimension of the visual feedback was either reduced to 70% or enlarged to 140%. Results: The young controls showed a gradual visuomotor adaptation that compensated for the visual feedback distortions during the exposure conditions. They also showed significant after effects during the postexposure conditions. The elderly controls marginally corrected for the size distortions and showed small after effects. The patients with PD, however, showed no trial by trial adaptations or after effects but instead, a progressive amplification of the distortion effect in each individual trial. Conclusion: The young controls used visual feedback to update their visuomotor map. The elderly controls seemed to make little use of visual feedback. The patients with Parkinson's disease rely on the visual feedback of previous or of ongoing strokes to programme subsequent strokes. This recursive feedback may play a part in the progressive reductions in handwriting size found in parkinsonian micrographia. PMID:11861687
Yang, Yea-Ru; Chen, Yi-Hua; Chang, Heng-Chih; Chan, Rai-Chi; Wei, Shun-Hwa; Wang, Ray-Yau
2015-10-01
We investigated the effects of a computer-generated interactive visual feedback training program on the recovery from pusher syndrome in stroke patients. Assessor-blinded, pilot randomized controlled study. A total of 12 stroke patients with pusher syndrome were randomly assigned to either the experimental group (N = 7, computer-generated interactive visual feedback training) or control group (N = 5, mirror visual feedback training). The scale for contraversive pushing for severity of pusher syndrome, the Berg Balance Scale for balance performance, and the Fugl-Meyer assessment scale for motor control were the outcome measures. Patients were assessed pre- and posttraining. A comparison of pre- and posttraining assessment results revealed that both training programs led to the following significant changes: decreased severity of pusher syndrome scores (decreases of 4.0 ± 1.1 and 1.4 ± 1.0 in the experimental and control groups, respectively); improved balance scores (increases of 14.7 ± 4.3 and 7.2 ± 1.6 in the experimental and control groups, respectively); and higher scores for lower extremity motor control (increases of 8.4 ± 2.2 and 5.6 ± 3.3 in the experimental and control groups, respectively). Furthermore, the computer-generated interactive visual feedback training program produced significantly better outcomes in the improvement of pusher syndrome (p < 0.01) and balance (p < 0.05) compared with the mirror visual feedback training program. Although both training programs were beneficial, the computer-generated interactive visual feedback training program more effectively aided recovery from pusher syndrome compared with mirror visual feedback training. © The Author(s) 2014.
ERIC Educational Resources Information Center
Hew, Soon-Hin; Ohki, Mitsuru
2004-01-01
This study examines the effectiveness of imagery and electronic visual feedback in facilitating students' acquisition of Japanese pronunciation skills. The independent variables, animated graphic annotation (AGA) and immediate visual feedback (IVF) were integrated into a Japanese computer-assisted language learning (JCALL) program focused on the…
Chow, John W; Stokic, Dobrivoje S
2018-03-01
We examined changes in variability, accuracy, frequency composition, and temporal regularity of force signal from vision-guided to memory-guided force-matching tasks in 17 subacute stroke and 17 age-matched healthy subjects. Subjects performed a unilateral isometric knee extension at 10, 30, and 50% of peak torque [maximum voluntary contraction (MVC)] for 10 s (3 trials each). Visual feedback was removed at the 5-s mark in the first two trials (feedback withdrawal), and 30 s after the second trial the subjects were asked to produce the target force without visual feedback (force recall). The coefficient of variation and constant error were used to quantify force variability and accuracy. Force structure was assessed by the median frequency, relative spectral power in the 0-3-Hz band, and sample entropy of the force signal. At 10% MVC, the force signal in subacute stroke subjects became steadier, more broadband, and temporally more irregular after the withdrawal of visual feedback, with progressively larger error at higher contraction levels. Also, the lack of modulation in the spectral frequency at higher force levels with visual feedback persisted in both the withdrawal and recall conditions. In terms of changes from the visual feedback condition, the feedback withdrawal produced a greater difference between the paretic, nonparetic, and control legs than the force recall. The overall results suggest improvements in force variability and structure from vision- to memory-guided force control in subacute stroke despite decreased accuracy. Different sensory-motor memory retrieval mechanisms seem to be involved in the feedback withdrawal and force recall conditions, which deserves further study. NEW & NOTEWORTHY We demonstrate that in the subacute phase of stroke, force signals during a low-level isometric knee extension become steadier, more broadband in spectral power, and more complex after removal of visual feedback. 
Larger force errors are produced when recalling target forces than immediately after withdrawing visual feedback. Although visual feedback offers better accuracy, it worsens force variability and structure in subacute stroke. The feedback withdrawal and force recall conditions seem to involve different memory retrieval mechanisms.
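The force-steadiness measures named in this abstract, coefficient of variation (variability), constant error (accuracy), and sample entropy (temporal regularity), are standard and can be sketched directly. The synthetic 10 s force trace, target level, and SampEn parameters (m = 2, r = 0.2 × SD) below are common illustrative choices, not the study's settings.

```python
import math
import random

def coefficient_of_variation(force):
    """Within-trial force variability, as a percentage of the mean."""
    mean = sum(force) / len(force)
    sd = math.sqrt(sum((f - mean) ** 2 for f in force) / (len(force) - 1))
    return sd / mean * 100.0

def constant_error(force, target):
    """Signed accuracy error: mean produced force minus the target."""
    return sum(force) / len(force) - target

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of matching templates of
    length m, A of length m+1, with tolerance r = r_frac * SD."""
    n = len(x)
    mean = sum(x) / n
    r = r_frac * math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    def matches(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if all(abs(x[i + k] - x[j + k]) <= r for k in range(mm)):
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Synthetic example: a noisy hold around a 100-unit target force.
random.seed(1)
target = 100.0
force = [target + random.gauss(0.0, 2.0) for _ in range(200)]
cv = coefficient_of_variation(force)
ce = constant_error(force, target)
```

A more temporally irregular (less structured) signal yields a higher sample entropy, which is the sense in which the stroke subjects' memory-guided force became "more complex" after feedback removal.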
Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.
Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri
2017-11-01
Visual navigation is a benchmark information processing task that can be used to identify the consequences of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues such as the sun or the pattern of polarized skylight and terrestrial cues such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine-scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from strictly diurnal through crepuscular to strictly nocturnal. In addition, a number of species have the ability to change from a day-active to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches not only between species but also within a single species. Their unique caste system, with an exclusively pedestrian mode of locomotion in workers and an exclusive life on the wing in males, allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim light. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Navigation ability dependent neural activation in the human brain: an fMRI study.
Ohnishi, Takashi; Matsuda, Hiroshi; Hirakata, Makiko; Ugawa, Yoshikazu
2006-08-01
Visual-spatial navigation in familiar and unfamiliar environments is an essential requirement of daily life. Animal studies have indicated the importance of the hippocampus for navigation. Neuroimaging studies have demonstrated gender differences and strategy-dependent differences in the neural substrates of navigation. Using functional magnetic resonance imaging, we measured brain activity related to navigation in four groups of normal volunteers: good navigators (males and females) and poor navigators (males and females). In a whole-group analysis, task-related activity was noted in the hippocampus, parahippocampal gyrus, posterior cingulate cortex, precuneus, parietal association areas, and visual association areas. In group comparisons, good navigators showed stronger activation in the medial temporal area and precuneus than poor navigators. There was neither a sex effect nor an interaction between sex and navigation ability. Activity in the left medial temporal areas was positively correlated with task performance, whereas activity in the right parietal area was negatively correlated with task performance. Furthermore, activity in the bilateral medial temporal areas was positively correlated with scores reflecting preferred navigation strategies, whereas activity in the bilateral superior parietal lobules was negatively correlated with them. Our data suggest that differences in brain activity related to navigation reflect navigation skill and strategies.
Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter
2015-03-01
Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.
An evaluation of unisensory and multisensory adaptive flight-path navigation displays
NASA Astrophysics Data System (ADS)
Moroney, Brian W.
1999-11-01
The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up-display-based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after the aircraft's position has been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2,812 hours of flight experience. Four navigational interface configurations, the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite, were combined factorially with three concurrent-task (CT) conditions, no CT, visual CT, and auditory CT, in a completely repeated-measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface.
It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
Indoor Navigation by People with Visual Impairment Using a Digital Sign System
Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan
2013-01-01
There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed, and obstacle detection is one of the most important navigation tasks for this population. In this research, a novel range-sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed, targeting different user groups. This paper discusses the design issues, functional modules and evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is becoming more prevalent owing to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that aid navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range-sensor technology can identify road conditions over a greater sensing range and notify users of potential dangers in advance.
A software module for implementing auditory and visual feedback on a video-based eye tracking system
NASA Astrophysics Data System (ADS)
Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.
2016-05-01
We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
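The three feedback rules the AVFM abstract describes, a click on AOI boundary crossings, a tone whose frequency falls with distance from the AOI centre, and a light patch while the gaze is inside, can be sketched as pure functions over a gaze-sample stream. The frequency band and distance-to-frequency mapping below are assumptions; the abstract specifies only that frequency is inversely proportional to distance.

```python
import math

def in_aoi(gaze, centre, radius):
    """True when the gaze point lies within the circular AOI."""
    return math.dist(gaze, centre) <= radius

def tone_frequency(gaze, centre, f_max=2000.0, f_min=200.0, scale=100.0):
    """Sinusoid frequency inversely related to the distance between the
    gaze point and the AOI centre, clamped to an audible band (Hz)."""
    d = math.dist(gaze, centre)
    return max(f_min, min(f_max, scale * f_max / (scale + d)))

def feedback_events(samples, centre, radius):
    """Walk a gaze-sample stream (e.g. 1 kHz) and emit feedback events:
    a click whenever the gaze crosses the AOI boundary, and a visual
    light patch for every sample inside the AOI."""
    events, inside = [], False
    for t, gaze in enumerate(samples):
        now_inside = in_aoi(gaze, centre, radius)
        if now_inside != inside:
            events.append((t, "click"))
        if now_inside:
            events.append((t, "light_patch_on"))
        inside = now_inside
    return events
```

Keeping the rules stateless apart from the single inside/outside flag is what makes this kind of feedback cheap enough to run alongside 1 kHz data collection.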
Development of voice navigation system for the visually impaired by using IC tags.
Takatori, Norihiko; Nojima, Kengo; Matsumoto, Masashi; Yanashima, Kenji; Magatani, Kazushige
2006-01-01
There are about 300,000 visually impaired persons in Japan. Most of them are elderly and cannot become skillful in using a white cane, even if they make the effort to learn. Therefore, guiding systems that support the independent activities of the visually impaired are required. In this paper, we describe a white cane system we developed that supports independent walking of the visually impaired in indoor spaces. The system is composed of colored navigation lines that include IC tags and an intelligent white cane containing a navigation computer. In our system, colored navigation lines put on the floor of the target space from the start point to the destination, together with IC tags set at landmark points, indicate the route to the destination. The white cane has a color sensor, an IC tag transceiver and a computer system that includes a voice processor. The cane senses the navigation line of the target color with the color sensor. When the color sensor finds the target color, the white cane informs the user by vibration that he or she is on the navigation line, so that, simply by following this vibration, the user can reach the destination. However, at some landmark points guidance is necessary. At these points, an IC tag is set under the navigation line; the cane communicates with the tag and informs the user about the landmark point with a prerecorded voice. Ten blindfolded normal subjects tested the developed system. All of them could walk along the navigation line, and the IC tag information system worked well. We therefore conclude that our system will be very valuable in supporting the activities of the visually impaired.
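The cane's per-cycle decision logic described above is simple enough to sketch: vibrate while the colour sensor reads the line colour, and speak a prerecorded message when an IC tag is read. The line colour, tag IDs, and messages below are illustrative placeholders; the abstract does not specify them.

```python
TARGET_COLOUR = "blue"   # colour of the navigation line (assumed)

def cane_step(colour_reading, tag_id, landmark_messages):
    """One sensing cycle of the intelligent cane.
    Returns (vibrate, voice_message): vibrate while the colour sensor
    sees the navigation line; speak when an IC tag is read."""
    vibrate = colour_reading == TARGET_COLOUR
    message = landmark_messages.get(tag_id) if tag_id is not None else None
    return vibrate, message

# Hypothetical landmark tags along one route.
landmarks = {17: "Turn left for the elevator", 23: "Destination ahead"}
```

The user-facing behaviour is thus purely reactive: the route itself is encoded in the environment (line plus tags), not in the cane, which keeps the handheld side small and the installation per-building.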
Visual orientation and navigation in nocturnal arthropods.
Warrant, Eric; Dacke, Marie
2010-01-01
With their highly sensitive visual systems, the arthropods have evolved a remarkable capacity to orient and navigate at night. Whereas some navigate under the open sky, and take full advantage of the celestial cues available there, others navigate in more difficult conditions, such as through the dense understory of a tropical rainforest. Four major classes of orientation are performed by arthropods at night, some of which involve true navigation (i.e. travel to a distant goal that lies beyond the range of direct sensory contact): (1) simple straight-line orientation, typically for escape purposes; (2) nightly short-distance movements relative to a shoreline, typically in the context of feeding; (3) long-distance nocturnal migration at high altitude in the quest to locate favorable feeding or breeding sites, and (4) nocturnal excursions to and from a fixed nest or food site (i.e. homing), a task that in most species involves path integration and/or the learning and recollection of visual landmarks. These four classes of orientation--and their visual basis--are reviewed here, with special emphasis given to the best-understood animal systems that are representative of each. 2010 S. Karger AG, Basel.
Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao
2018-06-25
Navigated inspection seeks to improve hazard identification (HI) accuracy. Given tight inspection schedules, HI also requires efficiency; however, without a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on the random search model (RSM), a cognitive method, an experiment was conducted to observe HI efficiency under navigation across a variety of visual clutter (VC) scenarios, using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection, improving inspection efficiency. This suggests a new and effective solution to the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM, and provides insights into inspectors' safety inspection ability.
33 CFR 175.135 - Existing equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Existing equipment. 175.135 Section 175.135 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.135 Existing equipment. Launchers...
33 CFR 62.47 - Sound signals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
Effects of Real-Time Visual Feedback on Pre-Service Teachers' Singing
ERIC Educational Resources Information Center
Leong, S.; Cheng, L.
2014-01-01
This pilot study focuses on the use of real-time visual feedback technology (VFT) in vocal training. The empirical research has two aims: to ascertain the effectiveness of the real-time visual feedback software "Sing & See" in the vocal training of pre-service music teachers, and the teachers' perspective on their experience with…
Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.
Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta
2014-07-01
We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
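A spatial index that "allows prefix-matching of spatial objects", as the abstract above describes, is commonly realized with quadtree keys: nearby objects share a key prefix, so a region query reduces to string prefix-matching. The sketch below is a generic illustration of that idea, not the authors' implementation; it assumes coordinates normalized to the unit square.

```python
def quadkey(x, y, depth):
    """Encode a point in [0, 1) x [0, 1) as a quadtree key string.
    At each level the cell is split in four; the digit 0-3 records
    which quadrant the point falls in. Keys sharing a prefix lie in
    the same quadtree cell."""
    key = []
    for _ in range(depth):
        x *= 2.0
        y *= 2.0
        qx, qy = int(x), int(y)        # quadrant indices (0 or 1 each)
        key.append(str(qx + 2 * qy))   # pack into one digit 0-3
        x -= qx
        y -= qy
    return "".join(key)
```

To find all objects inside a quadtree cell, one scans the sorted key list for entries beginning with that cell's key, which is exactly the prefix-matching the architecture relies on.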
Use of visual CO2 feedback as a retrofit solution for improving classroom air quality.
Wargocki, P; Da Silva, N A F
2015-02-01
Carbon dioxide (CO2 ) sensors that provide a visual indication were installed in classrooms during normal school operation. During 2-week periods, teachers and students were instructed to open the windows in response to the visual CO2 feedback in 1 week and open them, as they would normally do, without visual feedback, in the other week. In the heating season, two pairs of classrooms were monitored, one pair naturally and the other pair mechanically ventilated. In the cooling season, two pairs of naturally ventilated classrooms were monitored, one pair with split cooling in operation and the other pair with no cooling. Classrooms were matched by grade. Providing visual CO2 feedback reduced CO2 levels, as more windows were opened in this condition. This increased energy use for heating and reduced the cooling requirement in summertime. Split cooling reduced the frequency of window opening only when no visual CO2 feedback was present. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
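Visual CO2 indicators of the kind installed in these classrooms typically map concentration bands to traffic-light colors. A minimal sketch follows; the 1000/2000 ppm band limits are illustrative defaults, not values reported in the study.

```python
def co2_indicator(ppm, amber_above=1000, red_above=2000):
    """Map a CO2 reading (ppm) to a traffic-light color.
    Thresholds are hypothetical examples, not the study's settings."""
    if ppm < amber_above:
        return "green"   # ventilation adequate
    if ppm < red_above:
        return "amber"   # consider opening windows
    return "red"         # open windows now
```

The teachers' instruction in the feedback week amounts to acting on the non-green states of such an indicator.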
Kavadella, A; Kossioni, A E; Tsiklakis, K; Cowpe, J; Bullock, A; Barnes, E; Bailey, S; Thomas, H; Thomas, R; Karaharju-Suvanto, T; Suomalainen, K; Kersten, H; Povel, E; Giles, M; Walmsley, D; Soboleva, U; Liepa, A; Akota, I
2013-05-01
To provide evidence-based and peer-reviewed recommendations for the development of dental continuing professional development (CPD) learning e-modules. The present recommendations are consensus recommendations of the DentCPD project team and were informed by a literature review, consultations with e-learning and IT experts, discussions amongst the participants attending a special interest group during the 2012 ADEE meeting, and feedback from the evaluation procedures of the exemplar e-module (as described in a companion paper within this Supplement). The main focus of these recommendations is on the courses and modules organised and offered by dental schools. E-modules for dental CPD, as well as for other health professionals' continuing education, have been implemented and evaluated for a number of years. Research shows that the development of e-modules is a team process, undertaken by academics, subject experts, pedagogists, IT and web designers, learning technologists and librarians. The e-module must have clear learning objectives (outcomes), addressing the learners' individual needs, and must be visually attractive, relevant, interactive, promoting critical thinking and providing feedback. The text, graphics and animations must support the objectives and enable the learning process by creating an attractive, easy-to-navigate and interactive electronic environment. Technology is usually a concern for learners and tutors; therefore, it must be kept simple and interoperable within different systems and software. The pedagogical and technological proficiency of educators is of paramount importance, yet remains a challenge in many instances. The development of e-courses and modules for dental CPD is an endeavour undertaken by a group of professionals. It must be underpinned by sound pedagogical and e-learning principles and must incorporate elements for effective visual learning and visual design and a simple, consistent technology. © 2013 John Wiley & Sons A/S.
Thaler, Lore; Goodale, Melvyn A.
2011-01-01
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. 
PMID:21941474
From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation
Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.
2012-01-01
Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation. PMID:22969737
A study on haptic collaborative game in shared virtual environment
NASA Astrophysics Data System (ADS)
Lu, Keke; Liu, Guanyang; Liu, Lingzhi
2013-03-01
A study on a collaborative game in a shared virtual environment with haptic feedback over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. In contrast to traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment is to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance enhancement in a collaborative game in a shared virtual environment. The outcomes of this research can have a powerful impact on networked computer games.
ERIC Educational Resources Information Center
Munyofu, Mine
2008-01-01
The purpose of this study was to examine the instructional effectiveness of different levels of chunking (simple visual/text and complex visual/text), different forms of feedback (item-by-item feedback, end-of-test feedback and no feedback), and use of instructional gaming (game and no game) in complementing animated programmed instruction on a…
33 CFR 175.113 - Launchers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Launchers. 175.113 Section 175... SAFETY EQUIPMENT REQUIREMENTS Visual Distress Signals § 175.113 Launchers. (a) When a visual distress signal carried to meet the requirements of § 175.110 requires a launcher to activate, then a launcher...
Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.
Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-01-01
Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, while also taking into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise to motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.
Visual force feedback in laparoscopic training.
Horeman, Tim; Rodrigues, Sharon P; van den Dobbelsteen, John J; Jansen, Frank-Willem; Dankelman, Jenny
2012-01-01
To improve endoscopic surgical skills, an increasing number of surgical residents practice on box or virtual reality (VR) trainers. Current training is focused mainly on hand-eye coordination. Training methods that focus on applying the right amount of force are not yet available. The aim of this project is to develop a low-cost training system that measures the interaction force between tissue and instruments and displays a visual representation of the applied forces inside the camera image. This visual representation continuously informs the subject about the magnitude and the direction of applied forces. To show the potential of the developed training system, a pilot study was conducted in which six novices performed a needle-driving task in a box trainer with visual feedback of the force, and six novices performed the same task without visual feedback of the force. All subjects performed the training task five times and were subsequently tested in a post-test without visual feedback. The subjects who received visual feedback during training exerted on average 1.3 N (STD 0.6 N) to drive the needle through the tissue during the post-test. This value was considerably higher for the group that received no feedback (2.6 N, STD 0.9 N). The maximum interaction force during the post-test was noticeably lower for the feedback group (4.1 N, STD 1.1 N) compared with that of the control group (8.0 N, STD 3.3 N). The force-sensing training system provides us with the unique possibility to objectively assess tissue-handling skills in a laboratory setting. The real-time visualization of applied forces during training may facilitate acquisition of tissue-handling skills in complex laparoscopic tasks and could stimulate proficiency gain curves of trainees. However, larger randomized trials that also include other tasks are necessary to determine whether training with visual feedback about forces reduces the interaction force during laparoscopic surgery.
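Overlaying the magnitude and direction of a measured tool-tissue force in the camera image, as the trainer above does, amounts to converting the force vector into an arrow in screen coordinates. The helper below is a hypothetical rendering sketch, not the authors' code; the pixels-per-newton scale and clamp value are assumptions.

```python
import math

def force_arrow(fx, fy, origin, px_per_newton=20.0, f_max=8.0):
    """Convert a measured force vector (N) into an on-screen arrow:
    length encodes magnitude (clamped at f_max newtons so the arrow
    stays inside the image), angle encodes direction. Scale and clamp
    are illustrative values."""
    mag = min(math.hypot(fx, fy), f_max)
    ang = math.atan2(fy, fx)
    tip = (origin[0] + px_per_newton * mag * math.cos(ang),
           origin[1] + px_per_newton * mag * math.sin(ang))
    return tip, mag
```

Redrawing this arrow every camera frame gives the trainee the continuous magnitude-and-direction cue the pilot study credits for the lower post-test forces.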
Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei V.; Tunik, Eugene
2017-01-01
Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability. PMID:28553218
Cogné, Mélanie; Auriacombe, Sophie; Vasa, Louise; Tison, François; Klinger, Evelyne; Sauzéon, Hélène; Joseph, Pierre-Alain; N Kaoua, Bernard
2018-05-01
To evaluate whether visual cues are helpful for virtual spatial navigation and memory in Alzheimer's disease (AD) and patients with mild cognitive impairment (MCI). 20 patients with AD, 18 patients with MCI and 20 age-matched healthy controls (HC) were included. Participants had to actively reproduce a path that included 5 intersections with one landmark at each intersection that they had seen previously during a learning phase. Three cueing conditions for navigation were offered: salient landmarks, directional arrows and a map. A path without additional visual stimuli served as control condition. Navigation time and number of trajectory mistakes were recorded. With the presence of directional arrows, no significant difference was found between groups concerning the number of trajectory mistakes and navigation time. The number of trajectory mistakes did not differ significantly between patients with AD and patients with MCI on the path with arrows, the path with salient landmarks and the path with a map. There were significant correlations between the number of trajectory mistakes under the arrow condition and executive tests, and between the number of trajectory mistakes under the salient landmark condition and memory tests. Visual cueing such as directional arrows and salient landmarks appears helpful for spatial navigation and memory tasks in patients with AD and patients with MCI. This study opens new research avenues for neuro-rehabilitation, such as the use of augmented reality in real-life settings to support the navigational capabilities of patients with MCI and patients with AD. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?
Carman, Heidi M; Mactutus, Charles F
2002-09-01
Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.
Effect of visuomotor-map uncertainty on visuomotor adaptation.
Saijo, Naoki; Gomi, Hiroaki
2012-03-01
Vision and proprioception contribute to generating hand movement. If a conflict between the visual and proprioceptive feedback of hand position is given, reaching movement is disturbed initially but recovers after training. Although previous studies have predominantly investigated the adaptive change in the motor output, it is unclear whether the contributions of visual and proprioceptive feedback controls to the reaching movement are modified by visuomotor adaptation. To investigate this, we focused on the change in proprioceptive feedback control associated with visuomotor adaptation. After the adaptation to gradually introduce visuomotor rotation, the hand reached the shifted position of the visual target to move the cursor to the visual target correctly. When the cursor feedback was occasionally eliminated (probe trial), the end point of the hand movement was biased in the visual-target direction, while the movement was initiated in the adapted direction, suggesting the incomplete adaptation of proprioceptive feedback control. Moreover, after the learning of uncertain visuomotor rotation, in which the rotation angle was randomly fluctuated on a trial-by-trial basis, the end-point bias in the probe trial increased, but the initial movement direction was not affected, suggesting a reduction in the adaptation level of proprioceptive feedback control. These results suggest that the change in the relative contribution of visual and proprioceptive feedback controls to the reaching movement in response to the visuomotor-map uncertainty is involved in visuomotor adaptation, whereas feedforward control might adapt in a manner different from that of the feedback control.
The breast cancer patient navigation kit: development and user feedback.
Skrutkowski, Myriam; Saucier, Andréanne; Meterissian, Sarkis
2011-12-01
Our interdisciplinary team developed a written cancer patient education tool, the Breast Cancer Navigation Kit, to respond to the information needs of patients and family members and that meet patient literacy levels. A literature review and a focus group provided content development for four modules: "About Breast Cancer," "Body-Mind-Spirit," "After Treatment Ends," and "Practical Information." An evaluation by 31 women showed the kit to be easy to understand, very useful, and informative. However, all agreed that it could not replace the dialogue with health care professionals. An interdisciplinary approach involving patient feedback is key to develop appropriate patient education tools.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
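The "refining sequence of controllers" idea above, run the coarsest controller until its own convergence test hands off to a finer one, can be sketched generically as follows. This is an illustrative scheme, not Robonaut's control code; each controller is modeled as a (step, done) pair of hypothetical callables.

```python
def run_refining_sequence(state, controllers):
    """Execute controllers in coarse-to-fine order. Each controller
    is a (step, done) pair: step advances the state one cycle, and
    done(state) signals convergence, triggering hand-off to the next,
    finer controller. The last controller's fixed point is the final
    solution."""
    for step, done in controllers:
        while not done(state):
            state = step(state)
    return state
```

A toy usage: a coarse controller that halves the distance to the goal until it is under 2 units, then a fine controller that closes the remaining gap in 0.5-unit steps, mirroring the navigate-then-grasp structure of the move-to-grasp task.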
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, after which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology for optic canal decompression in cases of traumatic optic neuropathy.
Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F
2015-04-01
Brain shift and tissue deformation during surgery for intracranial lesions are the main current limitations of neuro-navigation (NN), which relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulty recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction was successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks and the error was < 3 mm in all cases (100 %).
Neuro-navigation using intraoperative US integrated with preoperative MRI is reliable, accurate and user-friendly. Moreover, the adjustments are very helpful in correcting brain shift and tissue distortion. This integrated system allows true real-time feedback during surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and orientation. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which leads to the surgical target cannot be directly visualized, making it difficult to obtain a full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop), developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit allowed facile integration and implementation of the desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
An indoor navigation system for the visually impaired.
Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have proven useful in real scenarios, they involve a substantial deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles on that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment.
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
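The integration of the absolute visual fix (from the ceiling target) with the gyroscope described above can be illustrated by a complementary filter, a standard sensor-fusion scheme. The blend weight and update rate below are illustrative assumptions, not values from the paper:

```python
# Sketch of gyro/vision heading fusion with a complementary filter.
# The gyro rate is integrated for smooth short-term heading, while the
# absolute visual fix (e.g., from a ceiling reference target) corrects
# long-term drift. ALPHA and DT are illustrative assumptions.

ALPHA = 0.98   # trust in the integrated gyro heading per update
DT = 0.1       # seconds between updates

def fuse_heading(heading, gyro_rate, visual_heading):
    """One complementary-filter update of the robot's heading estimate."""
    predicted = heading + gyro_rate * DT          # dead reckoning from gyro
    return ALPHA * predicted + (1.0 - ALPHA) * visual_heading

# A biased gyro (true rotation rate is zero) is pulled back toward the
# visual fix instead of drifting without bound.
heading = 0.0
true_heading = 0.0
for _ in range(500):
    biased_rate = 0.05  # rad/s of pure gyro bias
    heading = fuse_heading(heading, biased_rate, true_heading)
```

Pure integration of this biased rate would accumulate 2.5 rad of error over the run; the visual correction keeps the estimate bounded near the true heading.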
Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph
2018-06-01
Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic BrightMatter Drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary, but powerful, capability of this system is projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit the visualization of reconstructed navigation and surgical field images simultaneously. By utilizing navigated instruments, this configuration has the ability to support live image-guided surgery or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic BrightMatter Drive. We successfully co-adapted both platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. A disc shaver and trial implants were also used under RTN during the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery or RTN.
Off-axis projection also allows upright, neutral cervical spine operative ergonomics for the surgeons and improved surgical team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but long-term outcome measurements will be required to establish its efficacy.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Communication and navigation equipment for... § 121.349 Communication and navigation equipment for operations under VFR over routes not navigated by... receiver providing visual and aural signals; and (iii) One ILS receiver; and (3) Any RNAV system used to...
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
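The single-multimodal-representation model above can be illustrated with standard reliability-weighted (inverse-variance) cue combination. The cue variances and the gain value below are illustrative assumptions, not the paper's fitted parameters:

```python
# Sketch of combining visual and interoceptive heading estimates into one
# multimodal estimate by inverse-variance weighting. The cue variances and
# rotation gain are illustrative assumptions, not fitted values.

def combine(visual, interoceptive, var_visual, var_intero):
    """Inverse-variance weighted fusion of two cues into a single estimate."""
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_intero)
    return w_visual * visual + (1.0 - w_visual) * interoceptive

# A visual rotation gain of 1.5 makes the visual cue report 1.5x the
# physically turned angle; the combined estimate is pulled toward vision
# in proportion to its reliability.
physical_turn = 90.0              # degrees actually turned (interoception)
visual_cue = 1.5 * physical_turn  # what the manipulated projection shows
combined = combine(visual_cue, physical_turn,
                   var_visual=25.0, var_intero=100.0)
```

Under this scheme, path integration performed later in darkness inherits the visually adapted estimate rather than reverting to pure interoception, which is the signature behaviour the study reports.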
Balasubramaniam, Ramesh
2014-01-01
Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low- and high-pass filtering routines. Visual feedback delays affected low-frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundreds of milliseconds. PMID:24614576
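The frequency separation step can be sketched as follows. A simple moving-average split stands in here for the authors' actual low- and high-pass filters, and the synthetic trace and window size are illustrative assumptions:

```python
# Sketch of separating a COP time series into slow and fast components.
# A centered moving average stands in for a proper low-pass filter; the
# residual is the high-frequency component. Window size is illustrative.
import math

def moving_average(signal, window):
    """Centered moving average; edges use a shrinking window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_frequencies(cop, window=15):
    """Return (low, high) components; by construction they sum to the input."""
    low = moving_average(cop, window)
    high = [c - l for c, l in zip(cop, low)]
    return low, high

# Synthetic COP trace: a slow drift plus fast corrective jitter.
cop = [math.sin(0.05 * t) + 0.1 * math.sin(1.5 * t) for t in range(200)]
low, high = split_frequencies(cop)
```

Analyzing the two components separately is what lets delayed-feedback effects on slow corrections be distinguished from dual-task effects on rapid displacements.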
Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.
Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A
2015-05-01
Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it might instead rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.
Deyer, T W; Ashton-Miller, J A
1999-09-01
To test the (null) hypotheses that the reliability of unipedal balance is unaffected by the attenuation of visual velocity feedback and that, relative to baseline performance, deterioration of balance success rates from attenuated visual velocity feedback will not differ between groups of young men and older women, and the presence (or absence) of a vertical foreground object will not affect balance success rates. Single blind, single case study. University research laboratory. Two volunteer samples: 26 healthy young men (mean age, 20.0yrs; SD, 1.6); 23 healthy older women (mean age, 64.9 yrs; SD, 7.8). Normalized success rates in unipedal balance task. Subjects were asked to transfer to and maintain unipedal stance for 5 seconds in a task near the limit of their balance capabilities. Subjects completed 64 trials: 54 trials of three experimental visual scenes in blocked randomized sequences of 18 trials and 10 trials in a normal visual environment. The experimental scenes included two that provided strong velocity/weak position feedback, one of which had a vertical foreground object (SVWP+) and one without (SVWP-), and one scene providing weak velocity/strong position (WVSP) feedback. Subjects' success rates in the experimental environments were normalized by the success rate in the normal environment in order to allow comparisons between subjects using a mixed model repeated measures analysis of variance. The normalized success rate was significantly greater in SVWP+ than in WVSP (p = .0001) and SVWP- (p = .013). Visual feedback significantly affected the normalized unipedal balance success rates (p = .001); neither the group effect nor the group X visual environment interaction was significant (p = .9362 and p = .5634, respectively). Normalized success rates did not differ significantly between the young men and older women in any visual environment. 
Near the limit of the young men's or older women's balance capability, the reliability of transfer to unipedal balance was adversely affected by visual environments offering attenuated visual velocity feedback cues and those devoid of vertical foreground objects.
A 3D Model-Based Indoor Navigation System for Hubei Provincial Museum
NASA Astrophysics Data System (ADS)
Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.
2013-11-01
3D models are more powerful than 2D maps for indoor navigation in a complicated space like Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to our survey the 3D model is the navigation environment preferred by users. Therefore a 3D model based indoor navigation system was developed for Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which together support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all required data in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), based on a game engine, visualizes the 3D model on a mobile display with high performance.
NASA Astrophysics Data System (ADS)
Wilson, J. Adam; Walton, Léo M.; Tyler, Mitch; Williams, Justin
2012-08-01
This article describes a new method of providing feedback during a brain-computer interface movement task using a non-invasive, high-resolution electrotactile vision substitution system. We compared the accuracy and movement times during a center-out cursor movement task, and found that the task performance with tactile feedback was comparable to visual feedback for 11 participants. These subjects were able to modulate the chosen BCI EEG features during both feedback modalities, indicating that the type of feedback chosen does not matter provided that the task information is clearly conveyed through the chosen medium. In addition, we tested a blind subject with the tactile feedback system, and found that the training time, accuracy, and movement times were indistinguishable from results obtained from subjects using visual feedback. We believe that BCI systems with alternative feedback pathways should be explored, allowing individuals with severe motor disabilities and accompanying reduced visual and sensory capabilities to effectively use a BCI.
Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors
NASA Astrophysics Data System (ADS)
Lokka, I.; Çöltekin, A.
2016-06-01
The use of virtual environments (VEs) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate recall. However, it is not yet clear how much information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs in their function of supporting and strengthening human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations, and iii) the context in which the navigation is performed, i.e., the specific tasks within a given scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.
ERIC Educational Resources Information Center
Smorenburg, Ana R. P.; Ledebt, Annick; Deconinck, Frederik J. A.; Savelsbergh, Geert J. P.
2011-01-01
This study examined the active joint-position sense in children with Spastic Hemiparetic Cerebral Palsy (SHCP) and the effect of static visual feedback and static mirror visual feedback, of the non-moving limb, on the joint-position sense. Participants were asked to match the position of one upper limb with that of the contralateral limb. The task…
Valdés, Bulmaro Adolfo; Schneider, Andrea Nicole; Van der Loos, H F Machiel
2017-10-01
To investigate whether the compensatory trunk movements of stroke survivors observed during reaching tasks can be decreased by force and visual feedback, and to examine whether one of these feedback modalities is more efficacious than the other in reducing this compensatory tendency. Randomized crossover trial. University research laboratory. Community-dwelling older adults (N=15; 5 women; mean age, 64±11y) with hemiplegia from nontraumatic hemorrhagic or ischemic stroke (>3mo poststroke), recruited from stroke recovery groups, the research group's website, and the community. In a single session, participants received augmented feedback about their trunk compensation during a bimanual reaching task. Visual feedback (60 trials) was delivered through a computer monitor, and force feedback (60 trials) was delivered through 2 robotic devices. The primary outcome measure was change in anterior trunk displacement, measured by a motion-tracking camera. Secondary outcomes included trunk rotation, index of curvature (a measure of the straightness of the hands' path toward the target), root mean square error of the hands' movement (differences between hand positions on every iteration of the program), completion time for each trial, and a posttest questionnaire to evaluate users' experience and the system's usability. Both visual (-45.6% [45.8 SD] change from baseline, P=.004) and force (-41.1% [46.1 SD], P=.004) feedback were effective in reducing trunk compensation. Scores on secondary outcome measures did not improve with either feedback modality. Neither feedback condition was superior. Visual and force feedback show promise as 2 modalities that could be used to decrease trunk compensation in stroke survivors during reaching tasks. It remains to be established which of these 2 feedback modalities is more efficacious. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Watch what you type: the role of visual feedback from the screen and hands in skilled typewriting.
Snyder, Kristy M; Logan, Gordon D; Yamaguchi, Motonori
2015-01-01
Skilled typing is controlled by two hierarchically structured processing loops (Logan & Crump, 2011): The outer loop, which produces words, commands the inner loop, which produces keystrokes. Here, we assessed the interplay between the two loops by investigating how visual feedback from the screen (responses either were or were not echoed on the screen) and the hands (the hands either were or were not covered with a box) influences the control of skilled typing. Our results indicated, first, that the reaction time of the first keystroke was longer when responses were not echoed than when they were. Also, the interkeystroke interval (IKSI) was longer when the hands were covered than when they were visible, and the IKSI for responses that were not echoed was longer when explicit error monitoring was required (Exp. 2) than when it was not required (Exp. 1). Finally, explicit error monitoring was more accurate when response echoes were present than when they were absent, and implicit error monitoring (i.e., posterror slowing) was not influenced by visual feedback from the screen or the hands. These findings suggest that the outer loop adjusts the inner-loop timing parameters to compensate for reductions in visual feedback. We suggest that these adjustments are preemptive control strategies designed to execute keystrokes more cautiously when visual feedback from the hands is absent, to generate more cautious motor programs when visual feedback from the screen is absent, and to enable enough time for the outer loop to monitor keystrokes when visual feedback from the screen is absent and explicit error reports are required.
Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.
Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean
2016-08-01
Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small scale underwater vehicle, making underwater autonomous navigation feasible in real-time.
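Static output feedback means the control input is a fixed gain acting directly on the measured output, with no dynamic observer or state estimator in between. The corridor-centering sketch below uses an illustrative point-mass vehicle model and gain, not the paper's identified dynamics:

```python
# Sketch of static output feedback for corridor centering: the control
# input is a constant gain on the measured output (lateral offset from
# the corridor centerline), with no observer. The plant model, damping,
# and gain K are illustrative assumptions.

K = 0.4    # static output-feedback gain
DT = 0.05  # timestep, seconds

def step(offset, lateral_velocity):
    """One step of a damped point-mass vehicle under u = -K * offset."""
    u = -K * offset                  # static output feedback
    lateral_velocity += u * DT
    lateral_velocity *= 0.95         # simple hydrodynamic damping
    offset += lateral_velocity * DT
    return offset, lateral_velocity

offset, vel = 1.0, 0.0               # start 1 m off-center
for _ in range(2000):
    offset, vel = step(offset, vel)
```

Because the law is a single constant gain on a directly sensed quantity, it is computationally inexpensive, which is what makes it attractive for small underwater vehicles.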
Reissig, Paola; Garry, Michael I; Summers, Jeffery J; Hinder, Mark R
2014-01-01
Provision of a mirror image of a hand undertaking a motor task (i.e., mirror therapy) elicits behavioural improvements in the inactive hand. A greater understanding of the neural mechanisms underpinning this phenomenon is required to maximise its potential for rehabilitation across the lifespan, e.g., following hemiparesis or unilateral weakness. Young and older participants performed unilateral finger abductions with no visual feedback, with feedback of the active or passive hands, or with a mirror image of the active hand. Transcranial magnetic stimulation was used to assess feedback-related changes in two neurophysiological measures thought to be involved in inter-manual transfer of skill, namely corticospinal excitability (CSE) and intracortical inhibition (SICI) in the passive hemisphere. Task performance led to CSE increases, accompanied by decreases of SICI, in all visual feedback conditions relative to rest. However, the changes due to mirror feedback were not significantly different to those observed in the other (more standard) visual conditions. Accordingly, the unimanual motor action itself, rather than modifications in visual feedback, appears more instrumental in driving changes in CSE and SICI. Therefore, changes in CSE and SICI are unlikely to underpin the behavioural benefits of mirror therapy. We discuss implications for rehabilitation and directions of future research.
Evaluation of a technique to simplify area navigation and required navigation performance charts
DOT National Transportation Integrated Search
2013-06-30
Performance based navigation (PBN), an enabler for the Federal Aviation Administration's Next Generation Air Transportation System (NextGEN), supports the design of more precise flight procedures. However, these new procedures can be visually complex...
ERIC Educational Resources Information Center
Markle, Ross; Olivera-Aguilar, Margarita; Jackson, Teresa; Noeth, Richard; Robbins, Steven
2013-01-01
The "SuccessNavigator"™ assessment is an online, 30 minute self-assessment of psychosocial and study skills designed for students entering postsecondary education. In addition to providing feedback in areas such as classroom and study behaviors, commitment to educational goals, management of academic stress, and connection to social…
Intraoperative 3-Dimensional Computed Tomography and Navigation in Foot and Ankle Surgery.
Chowdhary, Ashwin; Drittenbass, Lisca; Dubois-Ferrière, Victor; Stern, Richard; Assal, Mathieu
2016-09-01
Computer-assisted orthopedic surgery has developed dramatically during the past 2 decades. This article describes the use of intraoperative 3-dimensional computed tomography and navigation in foot and ankle surgery. Traditional imaging based on serial radiography or C-arm-based fluoroscopy does not provide simultaneous real-time 3-dimensional imaging, and thus leads to suboptimal visualization and guidance. Three-dimensional computed tomography allows for accurate intraoperative visualization of the position of bones and/or navigation implants. Such imaging and navigation helps to further reduce intraoperative complications, leads to improved surgical outcomes, and may become the gold standard in foot and ankle surgery. [Orthopedics. 2016; 39(5):e1005-e1010.]. Copyright 2016, SLACK Incorporated.
Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.
Revina, Yulia; Petro, Lucy S; Muckli, Lars
2017-09-22
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs.
Brayfield, Brad P.
2016-01-01
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
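The pixel-by-pixel familiarity comparison at the heart of the NSFH described in this abstract can be sketched in a few lines of Python. Everything below (the toy 4-pixel "views", the function names, the sum-of-squared-differences metric) is an illustrative assumption, not the authors' actual implementation:

```python
# Hedged sketch of a scene-familiarity comparison: an agent scores
# candidate headings by the pixel-by-pixel difference between the
# current view and its stored training snapshots, then steers toward
# the most familiar view.

def pixel_difference(view_a, view_b):
    """Sum of squared pixel differences between two equal-length views."""
    return sum((a - b) ** 2 for a, b in zip(view_a, view_b))

def familiarity(view, memory):
    """Lower score = more familiar: best match over all stored snapshots."""
    return min(pixel_difference(view, snap) for snap in memory)

def choose_heading(candidate_views, memory):
    """Pick the heading whose view minimizes the familiarity score."""
    scores = {h: familiarity(v, memory) for h, v in candidate_views.items()}
    return min(scores, key=scores.get)

# Toy example: two stored route snapshots and three candidate rotations.
memory = [[10, 20, 30, 40], [12, 22, 28, 41]]
candidates = {
    "left":  [40, 30, 20, 10],
    "ahead": [11, 21, 29, 40],
    "right": [0, 0, 0, 0],
}
print(choose_heading(candidates, memory))  # -> ahead
```

Turning and moving so as to minimize this score, as the NSFH proposes, guides the agent along the stored route without any memorized sequence of actions.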
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2017-01-01
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
ERIC Educational Resources Information Center
Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2017-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…
Visual Odometry for Autonomous Deep-Space Navigation
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances this technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep space rendezvous, asteroid exploration, loss-of-comm.
Effects of visual feedback with a mirror on balance ability in patients with stroke.
In, Tae-Sung; Cha, Yu-Ri; Jung, Jin-Hwa; Jung, Kyoung-Sim
2016-01-01
[Purpose] This study aimed to examine the effects of visual feedback from a mirror on balance ability during quiet standing in patients with stroke. [Subjects] Fifteen patients with stroke (9 males, 6 females) were enrolled in the study. [Methods] Experimental trials (duration, 20 s) included three visual conditions (eyes closed, eyes open, and mirror feedback) and two support surface conditions (stable and unstable). Center of pressure (COP) displacements in the mediolateral and anteroposterior directions were recorded using a force platform. [Results] No effect of condition was observed in any direction on the stable surface. An effect of condition was observed on the unstable surface, with a smaller mediolateral COP distance in the mirror feedback condition than in the other two conditions. Similar results were observed for COP speed. [Conclusion] Visual feedback from a mirror is beneficial for improving balance ability during quiet standing on an unstable surface in patients with stroke.
Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg
2012-01-01
We introduce PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population due to the multiple health conditions that they face such as diabetes and its complications. PERCEPT system trials with 24 blind and visually impaired users in a multistory building show PERCEPT system effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225
A guidance law for hypersonic descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1992-05-01
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target. The resulting two-part feedback control scheme first solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Second, a neighboring optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
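For contrast with the neighboring-optimal scheme, the proportional-navigation baseline mentioned above can be written as a one-line control law: commanded lateral acceleration proportional to closing speed times line-of-sight rate. The gain and kinematic values below are illustrative assumptions, not taken from the paper:

```python
# Classic proportional-navigation law: a_c = N * Vc * (d lambda / dt),
# where N is the (dimensionless) navigation gain, Vc the closing speed,
# and lambda the line-of-sight angle to the target.

def pro_nav_accel(nav_gain, closing_speed, los_rate):
    """Commanded lateral acceleration (m/s^2) under proportional navigation."""
    return nav_gain * closing_speed * los_rate

# Example: N = 3, closing at 1200 m/s, LOS rotating at 0.02 rad/s.
print(pro_nav_accel(3.0, 1200.0, 0.02))  # 72.0 m/s^2 commanded
```

PN needs only the LOS rate and closing speed, which is why it serves as a simple baseline here; it makes no attempt to maximize terminal velocity, consistent with the comparison reported in the abstract.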
Vision-based flight control in the hawkmoth Hyles lineata
Windsor, Shane P.; Bomphrey, Richard J.; Taylor, Graham K.
2014-01-01
Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear-time invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths’ responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths’ responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight. PMID:24335557
An electronic dashboard to improve nursing care.
Tan, Yung-Ming; Hii, Joshua; Chan, Katherine; Sardual, Robert; Mah, Benjamin
2013-01-01
With the introduction of CPOE systems, nurses in a Singapore hospital were facing difficulties monitoring key patient information such as critical tasks and alerts. Issues include unfriendly user interfaces of clinical systems, information overload, and the loss of visual cues for action due to paperless workflows. The hospital decided to implement an interactive electronic dashboard on top of their CPOE system to improve visibility of vital patient data. A post-implementation survey was performed to gather end-user feedback and evaluate factors that influence user satisfaction of the dashboard. Questionnaires were sent to all nurses of five pilot wards. 106 valid responses were received. User adoption was good with 86% of nurses using the dashboard every shift. Mean satisfaction score was 3.6 out of 5. User satisfaction was strongly and positively correlated to the system's perceived impact on work efficiency and care quality. From qualitative feedback, nurses generally agreed that the dashboard had improved their awareness of critical patient issues without the hassle of navigating a CPOE system. This study shows that an interactive clinical dashboard when properly integrated with a CPOE system could be a useful tool to improve daily patient care.
A bio-inspired flying robot sheds light on insect piloting abilities.
Franceschini, Nicolas; Ruffier, Franck; Serres, Julien
2007-02-20
When insects are flying forward, the image of the ground sweeps backward across their ventral viewfield and forms an "optic flow," which depends on both the groundspeed and the groundheight. To explain how these animals manage to avoid the ground by using this visual motion cue, we suggest that insect navigation hinges on a visual-feedback loop we have called the optic-flow regulator, which controls the vertical lift. To test this idea, we built a micro-helicopter equipped with an optic-flow regulator and a bio-inspired optic-flow sensor. This fly-by-sight micro-robot can perform exacting tasks such as take-off, level flight, and landing. Our control scheme accounts for many hitherto unexplained findings published during the last 70 years on insects' visually guided performances; for example, it accounts for the fact that honeybees descend in a headwind, land with a constant slope, and drown when travelling over mirror-smooth water. Our control scheme explains how insects manage to fly safely without any of the instruments used onboard aircraft to measure the groundheight, groundspeed, and descent speed. An optic-flow regulator is quite simple in terms of its neural implementation and just as appropriate for insects as it would be for aircraft.
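The optic-flow regulator idea above can be illustrated with a minimal discrete-time sketch, assuming ventral optic flow ~ groundspeed / groundheight and a simple proportional adjustment of height via lift. The gain, setpoint, and step count are arbitrary assumptions for illustration:

```python
# Hedged sketch of an optic-flow regulator: hold the perceived ventral
# optic flow (groundspeed / groundheight) at a setpoint by adjusting
# height. If flow is too high (too low/too fast), climb; if too low, descend.

def simulate(groundspeed, setpoint=2.0, gain=0.5, height=1.0, steps=50):
    for _ in range(steps):
        flow = groundspeed / height   # perceived ventral optic flow (rad/s)
        error = flow - setpoint
        height += gain * error        # lift command: climb on positive error
    return height

# At steady state, height settles near groundspeed / setpoint,
# so the optic flow itself is held constant.
print(round(simulate(groundspeed=4.0), 2))  # 2.0
print(round(simulate(groundspeed=2.0), 2))  # 1.0
```

Note how this reproduces the honeybee observation cited in the abstract: a headwind reduces groundspeed, so the equilibrium height drops and the bee descends, with no explicit measurement of height or speed.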
Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R
2016-01-04
Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. The patients sat in front of a computer screen with the head fixed on a chin rest. Eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static) and during a real-time feedback-driven condition (dynamic) in gaze straight ahead and in (20°) sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparisons. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD ± 3.1 y). The mean slow-phase velocity was moderate during gaze straight ahead (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and feedback-driven conditions. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive real-time computer-based visual feedback compensates for the slow-phase velocity in downbeat nystagmus. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment
Velázquez, Ramiro; Pissaloux, Edwige; Lay-Ekuakille, Aimé
2015-01-01
Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments. PMID:27019593
McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M
2017-10-01
Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information.
Kang, Youn Joo; Park, Hae Kyung; Kim, Hyun Jung; Lim, Taeo; Ku, Jeonghun; Cho, Sangwoo; Kim, Sun I; Park, Eun Sook
2012-10-04
Several experimental studies in stroke patients suggest that mirror therapy and various virtual reality programs facilitate motor rehabilitation. However, the underlying mechanisms for these therapeutic effects have not been previously described. We attempted to delineate the changes in corticospinal excitability when individuals were asked to exercise their upper extremity using a real mirror and a virtual mirror. Moreover, we attempted to delineate the role of visual modulation within the virtual environment that affected corticospinal excitability in healthy subjects and stroke patients. A total of 18 healthy subjects and 18 hemiplegic patients were enrolled into the study. Motor evoked potentials (MEPs) from transcranial magnetic stimulation were recorded in the flexor carpi radialis of the non-dominant or affected upper extremity using three different conditions: (A) relaxation; (B) real mirror; and (C) virtual mirror. Moreover, we compared the MEPs from the virtual mirror paradigm using continuous visual feedback or intermittent visual feedback. The rates of amplitude increment and latency decrement of MEPs in both groups were higher during the virtual mirror task than during the real mirror. In healthy subjects and stroke patients, the virtual mirror task with intermittent visual feedback significantly facilitated corticospinal excitability of MEPs compared with continuous visual feedback. Corticospinal excitability was facilitated to a greater extent in the virtual mirror paradigm than in the real mirror and in intermittent visual feedback than in continuous visual feedback, in both groups. This provides neurophysiological evidence supporting the application of the virtual mirror paradigm using various visual modulation technologies to upper extremity rehabilitation in stroke patients.
Effects of Vibrotactile Feedback on Human Learning of Arm Motions
Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.
2015-01-01
Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject’s ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V), and visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects reduce the root mean square (rms) angle error of their limb significantly while they were learning the motions, particularly for 1DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644
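The rms angle-error metric reported in this study is simply the root-mean-square deviation of the tracked limb angle from the desired trajectory. The sketch below uses made-up angle samples purely for illustration:

```python
import math

# Root-mean-square angle error between a measured limb trajectory and
# the desired trajectory, sampled at matching time points (degrees).

def rms_angle_error(actual, desired):
    """Root mean square of pointwise angle errors."""
    n = len(actual)
    return math.sqrt(sum((a - d) ** 2 for a, d in zip(actual, desired)) / n)

desired = [0.0, 10.0, 20.0, 30.0]
with_guidance = [1.0, 11.0, 19.0, 29.0]     # small deviations (hypothetical)
without_guidance = [4.0, 14.0, 16.0, 26.0]  # larger deviations (hypothetical)

print(rms_angle_error(with_guidance, desired))     # 1.0
print(rms_angle_error(without_guidance, desired))  # 4.0
```

A lower rms value under vibrotactile guidance, as in the study's training phase, means the subject's limb stayed closer to the target motion throughout the trial.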
Spacecraft Guidance, Navigation, and Control Visualization Tool
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.
Neubauer, Aljoscha S; Langer, Julian; Liegl, Raffael; Haritoglou, Christos; Wolf, Armin; Kozak, Igor; Seidensticker, Florian; Ulbig, Michael; Freeman, William R; Kampik, Anselm; Kernt, Marcus
2013-01-01
The purpose of this study was to evaluate and compare clinical outcomes and retreatment rates using navigated macular laser versus conventional laser for the treatment of diabetic macular edema (DME). In this prospective, interventional pilot study, 46 eyes from 46 consecutive patients with DME were allocated to receive macular laser photocoagulation using navigated laser. Best corrected visual acuity and retreatment rate were evaluated for up to 12 months after treatment. The control group was drawn based on chart review of 119 patients treated by conventional laser at the same institutions during the same time period. Propensity score matching was performed with Stata, based on the nearest-neighbor method. Propensity score matching for age, gender, baseline visual acuity, and number of laser spots yielded 28 matched patients for the control group. Visual acuity after navigated macular laser improved from a mean 0.48 ± 0.37 logMAR by a mean +2.9 letters after 3 months, while the control group showed a mean -4.0 letters (P = 0.03). After 6 months, navigated laser maintained a mean visual gain of +3.3 letters, and the conventional laser group showed a slower mean increase to +1.9 letters versus baseline. Using Kaplan-Meier analysis, the laser retreatment rate showed separation of the survival curves after 2 months, with fewer retreatments in the navigated group than in the conventional laser group during the first 8 months (18% versus 31%, respectively, P = 0.02). The short-term results of this pilot study suggest that navigated macular photocoagulation is an effective technique and could be considered as a valid alternative to conventional slit-lamp laser for DME when focal laser photocoagulation is indicated. The observed lower retreatment rates with navigated retinal laser therapy in the first 8 months suggest a more durable treatment effect.
Zhang, Xi; Miao, Lingjuan; Shao, Haijun
2016-01-01
If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure prevents carrier tracking from being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate the local carrier and code. Although the local carrier frequency fluctuates widely, the accuracy of Doppler-shift estimation is improved. In ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from external navigation information is not used directly as the local carrier frequency, which makes it possible to retain the state-feedback design principle. However, under harsh conditions the GPS outputs may still bear large errors, which can corrupt the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure that the INS is properly calibrated. Finally, a field test and a semi-physical simulation based on a telemetered missile trajectory validate the effectiveness of the methods proposed in this paper. PMID:27144570
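The core idea of KF-based phase/frequency estimation can be sketched with a minimal two-state filter tracking carrier phase and Doppler frequency from noisy phase measurements. This is an illustrative toy, not the authors' dual-filter design; the state model, noise levels, and signal below are all assumptions.

```python
import random

def kf_phase_freq(measurements, dt=1.0, q=1e-4, r=0.0025):
    """Two-state Kalman filter; state x = [phase, frequency]."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    estimates = []
    for z in measurements:
        # Predict: constant-frequency model, phase advances by freq * dt.
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with a scalar (unwrapped) phase measurement z.
        s = p00 + r                      # innovation variance
        k0, k1 = p00 / s, p10 / s        # Kalman gains
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        estimates.append((x[0], x[1]))
    return estimates

random.seed(0)
true_freq = 0.37  # rad per step, hypothetical
zs = [true_freq * k + random.gauss(0, 0.05) for k in range(200)]
est = kf_phase_freq(zs)
print(abs(est[-1][1] - true_freq) < 0.1)  # frequency recovered from noisy phase
```

Even though the frequency is never measured directly, it becomes observable through the phase ramp, which is the property that lets such filters keep Doppler estimates accurate in dynamic scenarios.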
Shape Perception and Navigation in Blind Adults
Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara
2017-01-01
Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the representation of space development. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty to recognize complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226
White, Paul J.F.; Moussavi, Zahra
2016-01-01
In this case study, a man at the onset of Alzheimer’s disease (AD) was enrolled in a cognitive treatment program based upon spatial navigation in a virtual reality (VR) environment. We trained him to navigate to targets in a symmetric, landmark-less virtual building. Our research goals were to determine whether an individual with AD could learn to navigate in a simple VR navigation (VRN) environment and whether that training could also bring real-life cognitive benefits. The results show that our participant learned to perfectly navigate to desired targets in the VRN environment over the course of the training program. Furthermore, subjective feedback from his primary caregiver (his wife) indicated that his skill at navigating while driving improved noticeably and that he enjoyed cognitive improvement in his daily life at home. These results suggest that VRN treatments might benefit other people with AD. PMID:27840579
Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.
Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe
2017-09-01
Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment.
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
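The "control rendering" described above, resizing an image onto the electrode array according to average pixel brightness, amounts to block-average pooling. A minimal sketch, with a hypothetical synthetic image (the study's actual image pipeline is not specified here):

```python
def to_electrode_array(image, rows=15, cols=18):
    """Average-pool a 2D list of brightness values onto a rows x cols grid."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(rows):
        row = []
        y0, y1 = r * h // rows, (r + 1) * h // rows
        for c in range(cols):
            x0, x1 = c * w // cols, (c + 1) * w // cols
            # Average brightness of the pixel block feeding this electrode.
            block = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 60 x 72 synthetic image: left half dark (0.0), right half bright (1.0).
img = [[0.0] * 36 + [1.0] * 36 for _ in range(60)]
arr = to_electrode_array(img)
print(len(arr), len(arr[0]))   # 15 18
print(arr[0][0], arr[0][17])   # 0.0 1.0
```

The distance-based and wireframe strategies in the study replace the brightness values fed to this pooling step with depth or edge maps, which is why they can convey scene structure at such a low resolution.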
An Indoor Navigation System for the Visually Impaired
Guerrero, Luis A.; Vasquez, Francisco; Ochoa, Sergio F.
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment. PMID:22969398
Corticothalamic feedback enhances stimulus response precision in the visual system
Andolina, Ian M.; Jones, Helen E.; Wang, Wei; Sillito, Adam M.
2007-01-01
There is a tightly coupled bidirectional interaction between visual cortex and visual thalamus [lateral geniculate nucleus (LGN)]. Using drifting sinusoidal grating stimuli, we compared the response of cells in the LGN with and without feedback from the visual cortex. Raster plots revealed a striking difference in the response pattern of cells with and without feedback. This difference was reflected in the results from computing vector sum plots and the ratio of zero harmonic to the fundamental harmonic of the fast Fourier transform (FFT) for these responses. The variability of responses assessed by using the Fano factor was also different for the two groups, with the cells without feedback showing higher variability. We examined the covariance of these measures between pairs of simultaneously recorded cells with and without feedback, and they were much more strongly positively correlated with feedback. We constructed orientation tuning curves from the central 5 ms in the raw cross-correlograms of the outputs of pairs of LGN cells, and these curves revealed much sharper tuning with feedback. We discuss the significance of these data for cortical function and suggest that the precision in stimulus-linked firing in the LGN appears as an emergent factor from the corticothalamic interaction. PMID:17237220
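The Fano factor used above to assess response variability is simply the variance of spike counts across repeated trials divided by their mean (a Poisson process gives 1.0). A minimal sketch with hypothetical counts:

```python
def fano_factor(counts):
    """Variance-to-mean ratio of spike counts across repeated trials."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

print(fano_factor([8, 8, 8, 8]))    # 0.0: perfectly reliable responses
print(fano_factor([2, 14, 5, 11]))  # 2.8125: high trial-to-trial variability
```

In the study's terms, cells without cortical feedback would yield the larger of these two values: the same mean drive, but counts scattered more widely across trials.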
Learning feedback and feedforward control in a mirror-reversed visual environment.
Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn
2015-10-01
When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.
Stability of hand force production. I. Hand level control variables and multifinger synergies.
Reschechtko, Sasha; Latash, Mark L
2017-12-01
We combined the theory of neural control of movement with referent coordinates and the uncontrolled manifold hypothesis to explore synergies stabilizing the hand action in accurate four-finger pressing tasks. In particular, we tested a hypothesis on two classes of synergies, those among the four fingers and those within a pair of control variables, stabilizing hand action under visual feedback and disappearing without visual feedback. Subjects performed four-finger total force and moment production tasks under visual feedback; the feedback was later partially or completely removed. The "inverse piano" device was used to lift and lower the fingers smoothly at the beginning and at the end of each trial. These data were used to compute pairs of hypothetical control variables. Intertrial analysis of variance within the finger force space was used to quantify multifinger synergies stabilizing both force and moment. A data permutation method was used to quantify synergies among control variables. Under visual feedback, synergies in the spaces of finger forces and hypothetical control variables were found to stabilize total force. Without visual feedback, the subjects showed a force drift to lower magnitudes and a moment drift toward pronation. This was accompanied by disappearance of the four-finger synergies and strong attenuation of the control variable synergies. The indexes of the two types of synergies correlated with each other. The findings are interpreted within the scheme with multiple levels of abundant variables. NEW & NOTEWORTHY We extended the idea of hierarchical control with referent spatial coordinates for the effectors and explored two types of synergies stabilizing multifinger force production tasks. We observed synergies among finger forces and synergies between hypothetical control variables that stabilized performance under visual feedback but failed to stabilize it after visual feedback had been removed. 
Indexes of two types of synergies correlated with each other. The data suggest the existence of multiple mechanisms stabilizing motor actions. Copyright © 2017 the American Physiological Society.
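The uncontrolled-manifold (UCM) variance decomposition behind these synergy indexes can be sketched for the total-force task: deviations of the four finger forces along (1,1,1,1) change total force (V_ORT), while deviations orthogonal to that direction leave it unchanged (V_UCM); a synergy exists when V_UCM exceeds V_ORT per degree of freedom. The trial data below are hypothetical, and this is a simplified one-constraint version of the analysis.

```python
def ucm_decompose(trials):
    """Split across-trial finger-force variance into V_UCM and V_ORT,
    each normalized per degree of freedom."""
    n = len(trials)
    k = len(trials[0])  # number of fingers
    mean = [sum(t[i] for t in trials) / n for i in range(k)]
    v_ucm = v_ort = 0.0
    for t in trials:
        dev = [t[i] - mean[i] for i in range(k)]
        # Projection onto the force-changing direction (1,...,1)/sqrt(k).
        proj = sum(dev) / k ** 0.5
        sq = sum(d * d for d in dev)
        v_ort += proj * proj
        v_ucm += sq - proj * proj
    v_ort /= n * 1          # 1 dimension orthogonal to the UCM
    v_ucm /= n * (k - 1)    # k - 1 dimensions within the UCM
    return v_ucm, v_ort

# Hypothetical trials where fingers trade force to keep the total at 8 N:
trials = [[2, 2, 2, 2], [3, 1, 2, 2], [1, 3, 2, 2], [2, 2, 3, 1], [2, 2, 1, 3]]
v_ucm, v_ort = ucm_decompose(trials)
print(v_ucm > v_ort)  # True: a force-stabilizing synergy
```

The drifts reported without visual feedback correspond, in this picture, to the mean itself wandering while the structured covariation (high V_UCM relative to V_ORT) collapses.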
Evolved Navigation Theory and Horizontal Visual Illusions
ERIC Educational Resources Information Center
Jackson, Russell E.; Willey, Chela R.
2011-01-01
Environmental perception is prerequisite to most vertebrate behavior and its modern investigation initiated the founding of experimental psychology. Navigation costs may affect environmental perception, such as overestimating distances while encumbered (Solomon, 1949). However, little is known about how this occurs in real-world navigation or how…
ERIC Educational Resources Information Center
Majlesi, Ali Reza
2018-01-01
This study aims to show how multimodality, that is, the mobilization of various communicative resources in social actions (Mondada, 2016), can be used to teach grammar. Drawing on ethnomethodological conversation analysis (Sacks, 1992), the article provides a detailed analysis of 2 corrective feedback sequences in a Swedish-as-a-second-language…
ERIC Educational Resources Information Center
Winarno, Sri; Muthu, Kalaiarasi Sonai; Ling, Lew Sook
2018-01-01
This study presents students' feedback and learning impact on design and development of a multimedia learning in Direct Problem-Based Learning approach (mDPBL) for Computer Networks in Dian Nuswantoro University, Indonesia. This study examined the usefulness, contents and navigation of the multimedia learning as well as learning impacts towards…
Neural mechanisms of limb position estimation in the primate brain.
Shi, Ying; Buneo, Christopher A
2011-01-01
Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal-directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single-unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
33 CFR 149.135 - What should be marked on the cargo transfer system alarm switch?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false What should be marked on the cargo transfer system alarm switch? 149.135 Section 149.135 Navigation and Navigable Waters COAST GUARD... switch? Each switch for activating an alarm, and each audio or visual device for signaling an alarm, must...
2011-01-01
Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task.
This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
Tapia, Evelina; Beck, Diane M
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.
Neural correlates of virtual route recognition in congenital blindness.
Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice
2010-07-13
Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.
Visual Place Learning in Drosophila melanogaster
Ofstad, Tyler A.; Zuker, Charles S.; Reiser, Michael B.
2011-01-01
The ability of insects to learn and navigate to specific locations in the environment has fascinated naturalists for decades. While the impressive navigation abilities of ants, bees, wasps, and other insects clearly demonstrate that insects are capable of visual place learning [1-4], little is known about the underlying neural circuits that mediate these behaviors. Drosophila melanogaster is a powerful model organism for dissecting the neural circuitry underlying complex behaviors, from sensory perception to learning and memory. Flies can identify and remember visual features such as size, color, and contour orientation [5, 6]. However, the extent to which they use vision to recall specific locations remains unclear. Here we describe a visual place-learning platform and demonstrate that Drosophila are capable of forming and retaining visual place memories to guide selective navigation. By targeted genetic silencing of small subsets of cells in the Drosophila brain we show that neurons in the ellipsoid body, but not in the mushroom bodies, are necessary for visual place learning. Together, these studies reveal distinct neuroanatomical substrates for spatial versus non-spatial learning, and substantiate Drosophila as a powerful model for the study of spatial memories. PMID:21654803
Vision and visual navigation in nocturnal insects.
Warrant, Eric; Dacke, Marie
2011-01-01
With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.
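The summation hypothesis above has a simple quantitative core: photon arrivals are Poisson, so a catch with mean m has signal-to-noise ratio m / sqrt(m) = sqrt(m), and pooling n independent receptive fields multiplies the mean by n. The rates and pool sizes below are illustrative, not measured values from the review.

```python
def pooled_snr(photons_per_receptor, n_pooled):
    """SNR of a pooled Poisson photon catch: sqrt(total mean count)."""
    m = photons_per_receptor * n_pooled
    return m ** 0.5

print(round(pooled_snr(0.1, 1), 2))    # 0.32: lone receptor in dim light
print(round(pooled_snr(0.1, 100), 2))  # 3.16: pooling 100 receptors
```

The sqrt(n) gain is why spatial summation can rescue vision at photon catch rates that are hopeless for a single photoreceptor, at the cost of spatial resolution.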
2015-06-01
Keywords: visualization, graph navigation, visual literacy.
Verbal communication improves laparoscopic team performance.
Shiliang Chang; Waid, Erin; Martinec, Danny V; Bin Zheng; Swanstrom, Lee L
2008-06-01
The impact of verbal communication on laparoscopic team performance was examined. A total of 24 dyad teams, comprising residents, medical students, and office staff, underwent 2 team tasks using a previously validated bench model. Twelve teams (feedback groups) received instant verbal instruction and feedback on their performance from an instructor, and they were compared with 12 teams (control groups) that received minimal or no verbal feedback. Their performances were video- and audio-recorded for analysis. Surgical backgrounds were similar between the feedback and control groups. Teams with more verbal feedback achieved significantly better task performance (P = .002) compared with the control group with less feedback. The impact of verbal feedback was more pronounced for tasks requiring team cooperation (aiming and navigation) than for tasks depending on individual skills (knotting). Verbal communication, especially instructions and feedback from an experienced instructor, improved team efficiency and performance.
Cheng, Sen; Sabes, Philip N
2007-04-01
The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
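The trial-by-trial model described above can be sketched as a one-dimensional linear dynamical system: the calibration state decays toward baseline with retention factor a, is corrected by a fraction b of each observed error, and accumulates state noise, while performance noise perturbs each reach independently. The parameter values below are illustrative assumptions, not the paper's fitted values.

```python
import random

def simulate_adaptation(shifts, a=0.98, b=0.25, state_sd=0.0, perf_sd=0.0, seed=1):
    rng = random.Random(seed)
    x = 0.0  # state of sensorimotor calibration
    errors = []
    for s in shifts:
        # Observed reach error: state + imposed visual shift + performance noise.
        e = x + s + rng.gauss(0, perf_sd)
        errors.append(e)
        # Retention (decay toward baseline) + error correction + state noise.
        x = a * x - b * e + rng.gauss(0, state_sd)
    return errors

# Constant 1-unit visual shift, noise-free: error decays geometrically toward
# a nonzero asymptote (1 - a) / (1 - a + b), since decay fights the correction.
errs = simulate_adaptation([1.0] * 100)
print(round(errs[0], 2), round(errs[-1], 2))  # 1.0 0.07
```

Setting `state_sd > 0` makes the noise accumulate in `x` across trials, producing exactly the temporal correlations in the error sequence that the paper uses to separate state noise from performance noise.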
Navigation Assistance: A Trade-Off between Wayfinding Support and Configural Learning Support
ERIC Educational Resources Information Center
Münzer, Stefan; Zimmer, Hubert D.; Baus, Jörg
2012-01-01
Current GPS-based mobile navigation assistance systems support wayfinding, but they do not support learning about the spatial configuration of an environment. The present study examined effects of visual presentation modes for navigation assistance on wayfinding accuracy, route learning, and configural learning. Participants (high-school students)…
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
2014-01-01
Experimentalists tend to classify models of visual perception as being either local or global, and as involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, and that global factors such as perceptual grouping are instead crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of the system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena. PMID:25374554
Evidence for discrete landmark use by pigeons during homing.
Mora, Cordula V; Ross, Jeremy D; Gorsevski, Peter V; Chowdhury, Budhaditya; Bingman, Verner P
2012-10-01
Considerable efforts have been made to investigate how homing pigeons (Columba livia f. domestica) are able to return to their loft from distant, unfamiliar sites while the mechanisms underlying navigation in familiar territory have received less attention. With the recent advent of global positioning system (GPS) data loggers small enough to be carried by pigeons, the role of visual environmental features in guiding navigation over familiar areas is beginning to be understood, yet, surprisingly, we still know very little about whether homing pigeons can rely on discrete, visual landmarks to guide navigation. To assess a possible role of discrete, visual landmarks in navigation, homing pigeons were first trained to home from a site with four wind turbines as salient landmarks as well as from a control site without any distinctive, discrete landmark features. The GPS-recorded flight paths of the pigeons on the last training release were straighter and more similar among birds from the turbine site compared with those from the control site. The pigeons were then released from both sites following a clock-shift manipulation. Vanishing bearings from the turbine site continued to be homeward oriented as 13 of 14 pigeons returned home. By contrast, at the control site the vanishing bearings were deflected in the expected clock-shift direction and only 5 of 13 pigeons returned home. Taken together, our results offer the first strong evidence that discrete, visual landmarks are one source of spatial information homing pigeons can utilize to navigate when flying over a familiar area.
Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions
Evans, Nathan; Gale, Steven; Schurger, Aaron; Blanke, Olaf
2015-01-01
Recent advances in neuroscience and engineering have led to the development of technologies that permit the control of external devices through real-time decoding of brain activity (brain-machine interfaces; BMI). Though the feeling of controlling bodily movements (sense of agency; SOA) has been well studied and a number of well-defined sensorimotor and cognitive mechanisms have been put forth, very little is known about the SOA for BMI-actions. Using an on-line BMI, and verifying that our subjects achieved a reasonable level of control, we sought to describe the SOA for BMI-mediated actions. Our results demonstrate that discrepancies between decoded neural activity and its resultant real-time sensory feedback are associated with a decrease in the SOA, similar to SOA mechanisms proposed for bodily actions. However, if the feedback discrepancy serves to correct a poorly controlled BMI-action, then the SOA can be high and can increase with increasing discrepancy, demonstrating the dominance of visual feedback on the SOA. Taken together, our results suggest that bodily and BMI-actions rely on common mechanisms of sensorimotor integration for agency judgments, but that visual feedback dominates the SOA in the absence of overt bodily movements or proprioceptive feedback, however erroneous the visual feedback may be. PMID:26066840
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to the movement duration, in the absence of any visual target, so that error signals arose solely from comparisons between predicted and actual reafferences of the hand. To prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated.
PMID:25408644
Analysis of a novel device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng; Wu, Wei
2017-02-01
The combination of the strap-down inertial navigation system (SINS) and the celestial navigation system (CNS) is a popular way to constitute an integrated navigation system. A star sensor (SS) is used as a precise attitude-determination device in the CNS. To address the problem that star images obtained by the SS are motion-blurred under dynamic conditions, the attitude-correlated frames (ACF) approach is presented; a star sensor that operates on the ACF approach is termed an ACFSS. Building on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feeding the gyro error back into the ACF process is a defining characteristic of this deeply integrated SINS/CNS method. Simulation results verify the method's validity and its efficiency in improving gyro accuracy, demonstrating that the method is feasible.
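The gyro-error feedback loop described above can be illustrated with a one-dimensional toy filter. This is a didactic sketch under simplifying assumptions (single axis, constant gyro bias, perfect star fixes); it is not the device-level algorithm of the paper, and the function name and gains are invented for illustration.

```python
def fuse_attitude(gyro_rates, star_fixes, dt=0.01, k_att=0.5, k_bias=0.2):
    """1-D SINS/CNS fusion toy: integrate a (biased) gyro rate, then use
    occasional star-sensor attitude fixes to correct the attitude and
    feed the residual error back into a gyro-bias estimate.

    star_fixes: dict mapping step index -> star-sensor attitude reading.
    """
    attitude, bias_est = 0.0, 0.0
    for i, rate in enumerate(gyro_rates):
        attitude += (rate - bias_est) * dt      # strap-down integration
        if i in star_fixes:                     # celestial attitude update
            err = star_fixes[i] - attitude
            attitude += k_att * err             # correct the attitude
            bias_est -= k_bias * err            # feedback into the bias estimate
    return attitude, bias_est

# True attitude is constant (true rate 0) but the gyro reads a 0.5 deg/s bias;
# star fixes report the true attitude once per simulated second.
att, bias = fuse_attitude([0.5] * 5000, {i: 0.0 for i in range(100, 5000, 100)})
```

Without the star-fix feedback the attitude would drift linearly; with it, the bias estimate converges toward the true 0.5 deg/s and the drift is arrested, which is the point of closing the loop from the celestial sensor back to the inertial unit.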
Modeling trial by trial and block feedback in perceptual learning
Liu, Jiajuan; Dosher, Barbara; Lu, Zhong-Lin
2014-01-01
Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but through different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model drive performance improvement: external trial-by-trial feedback when available, the self-generated output acting as internal feedback when no external feedback is available, and adaptive criterion control based on block feedback. By simulating a comprehensive feedback study (Herzog & Fahle, 1997, Vision Research, 37(15), 2133–2141), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning. PMID:24423783
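The idea that the model's own response can stand in for missing external feedback is easy to demonstrate with a toy reweighting learner. The sketch below is not the AHRM itself (which operates on multi-channel visual representations with Hebbian augmentation and criterion control); it is a minimal delta-rule illustration of the teacher-substitution principle, with all names and parameters invented.

```python
import random

def train_reweight(n_trials, external_feedback, lr=0.05, seed=1):
    """Toy two-alternative learner: a single weight on a noisy internal
    representation is updated toward a teacher signal, which is the
    external feedback when available and otherwise the model's own
    (self-generated) response."""
    rng = random.Random(seed)
    w, correct = 0.0, 0
    for _ in range(n_trials):
        label = rng.choice([-1, 1])             # stimulus category
        x = label + rng.gauss(0.0, 1.0)         # noisy internal response
        response = 1 if w * x >= 0 else -1
        correct += (response == label)
        teacher = label if external_feedback else response
        w += lr * teacher * x                   # Hebbian/delta reweighting
    return correct / n_trials
```

With external feedback the weight reliably grows in the correct direction and accuracy approaches the noise-limited ceiling; with self-generated feedback the update is self-reinforcing, which hints at why learning can proceed without feedback in some conditions yet can also lock in errors, consistent with the false-feedback result.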
ERIC Educational Resources Information Center
Bieber, Carrie; Gurski, John C.
In an attempt to confirm earlier results with a group of mentally retarded females, 12 mentally retarded institutionalized adults (8 male, 4 female) were trained to either reduce (Loud group) or increase (Soft group) their voice volumes with a combination of visual feedback and token reinforcement. The feedback unit provided a binary light on-off…
ERIC Educational Resources Information Center
Patten, Iomi; Edmonds, Lisa A.
2015-01-01
The present study examines the effects of training native Japanese speakers in the production of American /r/ using spectrographic visual feedback. Within a modified single-subject design, two native Japanese participants produced single words containing /r/ in a variety of positions while viewing live spectrographic feedback with the aim of…
Navigation Constellation Design Using a Multi-Objective Genetic Algorithm
2015-03-26
programs. This specific tool not only offers high-fidelity simulations, but it also offers the visual aid provided by STK. The ability to...MATLAB and STK. STK is a program that allows users to model, analyze, and visualize space systems. Users can create objects such as satellites and...position dilution of precision (PDOP) and system cost. This thesis utilized Satellite Tool Kit (STK) to calculate PDOP values of navigation
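PDOP itself is computed from the receiver-to-satellite geometry. The sketch below uses the standard GNSS definition (PDOP as the square root of the trace of the position block of (GᵀG)⁻¹); it is a plain-Python illustration, not the thesis's STK workflow, and the helper names are invented.

```python
import math

def _mat_inv(a):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))  # partial pivot
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def pdop(unit_vectors):
    """PDOP from receiver-to-satellite unit line-of-sight vectors.
    Each row of the geometry matrix G is [ux, uy, uz, 1]; PDOP is the
    square root of the sum of the three position variances in (G^T G)^-1."""
    g = [list(u) + [1.0] for u in unit_vectors]
    gtg = [[sum(row[i] * row[j] for row in g) for j in range(4)]
           for i in range(4)]
    q = _mat_inv(gtg)
    return math.sqrt(q[0][0] + q[1][1] + q[2][2])
```

Widely spread satellites give a small PDOP; a tight cluster of nearly parallel lines of sight inflates it, which is exactly the geometric figure of merit a constellation-design optimizer penalizes.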
The Impact of Accelerated Promotion Rates on Drill Sergeant Performance
2011-01-01
land navigation, communication (voice/visual), NBC protection). I have good knowledge of most Warrior tasks; I have sufficient skills to handle...but seldom reach out on my own initiative. I communicate and work well with others regardless of background; I encourage attitudes of tolerance and...most of the Warrior tasks (e.g., land navigation, communication (voice/visual), NBC protection). I have good knowledge of most Warrior tasks; I
HierarchicalTopics: visually exploring large text collections using topic hierarchies.
Dou, Wenwen; Yu, Li; Wang, Xiaoyu; Ma, Zhiqiang; Ribarsky, William
2013-12-01
Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate at which more data is being generated. Topic-based text summarization methods coupled with interactive visualizations are promising approaches to the challenge of analyzing large text corpora. As text corpora and vocabularies grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, most current topic-based visualizations have difficulty representing a large number of topics without becoming cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system, HierarchicalTopics (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy from a list of topics. The interactive visual interface is designed to present topic content as well as the temporal evolution of topics in a hierarchical fashion. User interactions allow users to modify the topic hierarchy according to their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aids expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of the hierarchical topic structure. The results reveal that HT leads to faster identification of a large number of relevant topics. We also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.
Boccia, M; Piccardi, L; Palermo, L; Nemmi, F; Sulpizio, V; Galati, G; Guariglia, C
2014-09-05
Visual mental imagery is a process that draws on different cognitive abilities and is affected by the contents of mental images. Several studies have demonstrated that different brain areas subtend the mental imagery of navigational and non-navigational contents. Here, we set out to determine whether there are distinct representations for navigational and geographical images. Specifically, we used a Spatial Compatibility Task (SCT) to assess the mental representation of a familiar navigational space (the campus), a familiar geographical space (the map of Italy) and familiar objects (the clock). Twenty-one participants judged whether the vertical or the horizontal arrangement of items was correct. We found that distinct representational strategies were preferred to solve different categories on the SCT, namely, the horizontal perspective for the campus and the vertical perspective for the clock and the map of Italy. Furthermore, we found significant effects due to individual differences in the vividness of mental images and in preferences for verbal versus visual strategies, which selectively affect the contents of mental images. Our results suggest that imagining a familiar navigational space is somewhat different from imagining a familiar geographical space. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Image navigation as a means to expand the boundaries of fluorescence-guided surgery
NASA Astrophysics Data System (ADS)
Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.
2012-05-01
Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.
Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People
Martinez-Sala, Alejandro Santos; Losilla, Fernando; Sánchez-Aarnoutse, Juan Carlos; García-Haro, Joan
2015-01-01
Indoor navigation is a challenging task for visually impaired people. Although guidance systems are available for such purposes, they have drawbacks that hamper their direct application in real-life situations. These systems are either too complex or inaccurate, or they require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. Interaction with the user takes place through acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified with a functional and usable prototype in a field test with a blind person. In addition, other tests have been conducted to show the accuracy of different relevant parts of the system. PMID:26703610
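The pathfinding step can be sketched with a textbook A* search over a grid. This is a generic illustration of the A* algorithm named in the abstract, not SUGAR's actual implementation; the occupancy-grid representation and all identifiers are assumptions.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid
    (0 = free cell, 1 = obstacle), with a Manhattan-distance heuristic.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    counter = itertools.count()                 # tie-breaker for the heap
    frontier = [(heuristic(start), 0, next(counter), start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                            # already expanded
        came_from[node] = parent
        if node == goal:                        # rebuild path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(frontier,
                                   (ng + heuristic(nb), ng, next(counter), nb, node))
    return None
```

In a real guidance system the grid cells would come from the spatial database, and the returned cell sequence would be translated into the acoustic and voice cues delivered to the user.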
Mayhew, Stephen D; Porcaro, Camillo; Tecchio, Franca; Bagshaw, Andrew P
2017-03-01
A bilateral visuo-parietal-motor network is responsible for fine control of hand movements. However, which sub-regions are devoted to the maintenance of contraction stability, and how these processes fluctuate with the trial-by-trial quality of task execution and with the presence or absence of visual feedback, remain unclear. We addressed this by integrating behavioural and fMRI measurements during right-hand isometric compression of a compliant rubber bulb, at 10% and 30% of maximum voluntary contraction, both with and without visual feedback of the applied force. We quantified single-trial behavioural performance during 1) the whole task period and 2) stable contraction maintenance, and regressed these metrics against the fMRI data to identify the brain activity most relevant to trial-by-trial fluctuations in performance during specific task phases. fMRI-behaviour correlations in a bilateral network of visual, premotor, primary motor, parietal and inferior frontal cortical regions emerged during performance of the entire feedback task, but only in premotor cortex, parietal cortex and thalamus during the stable contraction period. The trials with the best task performance showed increased bilaterality and amplitude of fMRI responses. With feedback, stronger BOLD-behaviour coupling was found during 10% than during 30% contractions. Only a small subset of regions in this network was weakly correlated with behaviour without feedback, despite a wider network being activated during this task than in the presence of feedback. These findings reflect a more focused network strongly coupled to behavioural fluctuations when visual feedback is provided, whereas without it the task recruited widespread brain activity almost uncoupled from behavioural performance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
The Role of Direct and Visual Force Feedback in Suturing Using a 7-DOF Dual-Arm Teleoperated System.
Talasaz, Ali; Trejos, Ana Luisa; Patel, Rajni V
2017-01-01
The lack of haptic feedback in robotics-assisted surgery can result in tissue damage or accidental tool-tissue hits. This paper focuses on exploring the effect of haptic feedback via direct force reflection and visual presentation of force magnitudes on performance during suturing in robotics-assisted minimally invasive surgery (RAMIS). For this purpose, a haptics-enabled dual-arm master-slave teleoperation system capable of measuring tool-tissue interaction forces in all seven Degrees-of-Freedom (DOFs) was used. Two suturing tasks, tissue puncturing and knot-tightening, were chosen to assess user skills when suturing on phantom tissue. Sixteen subjects participated in the trials and their performance was evaluated from various points of view: force consistency, number of accidental hits with tissue, amount of tissue damage, quality of the suture knot, and the time required to accomplish the task. According to the results, visual force feedback was not very useful during the tissue puncturing task as different users needed different amounts of force depending on the penetration of the needle into the tissue. Direct force feedback, however, was more useful for this task to apply less force and to minimize the amount of damage to the tissue. Statistical results also reveal that both visual and direct force feedback were required for effective knot tightening: direct force feedback could reduce the number of accidental hits with the tissue and also the amount of tissue damage, while visual force feedback could help to securely tighten the suture knots and maintain force consistency among different trials/users. These results provide evidence of the importance of 7-DOF force reflection when performing complex tasks in a RAMIS setting.
McGough, Rian; Paterson, Kade; Bradshaw, Elizabeth J; Bryant, Adam L; Clark, Ross A
2012-01-01
Weight-bearing asymmetry (WBA) may be detrimental to performance and could increase the risk of injury; however, detecting and reducing it is difficult in a field setting. This study assessed whether a portable and simple-to-use system designed with multiple Nintendo Wii Balance Boards (NWBBs) and customized software can be used to evaluate and improve WBA. Fifteen elite Australian Rules Footballers and 32 age-matched, untrained participants were tested for measures of WBA while squatting. The NWBB and customized software provided real-time visual feedback of WBA during half of the trials. Outcome measures included the mean mass difference (MMD) between limbs, interlimb symmetry index (SI), and percentage of time spent favoring a single limb (TFSL). Significant reductions in MMD (p = 0.028) and SI (p = 0.007) with visual feedback were observed for the entire group data. Subgroup analysis revealed significant reductions in MMD (p = 0.047) and SI (p = 0.026) with visual feedback in the untrained sample; however, the reductions in the trained sample were nonsignificant. The trained group showed significantly less WBA for TFSL under both visual conditions (no feedback: p = 0.015, feedback: p = 0.017). Correlation analysis revealed that participants with high levels of WBA had the greatest response to feedback (p < 0.001, ρ = 0.557). In conclusion, WBA exists in healthy untrained adults, and these asymmetries can be reduced using real-time visual feedback provided by an NWBB-based system. Healthy, well-trained professional athletes do not possess the same magnitude of WBA. Inexpensive, portable, and widely available gaming technology may be used to evaluate and improve WBA in clinical and sporting settings.
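The outcome measures above can be computed from paired left/right load samples in a few lines. This is a hedged sketch: the definitions below (mean absolute mass difference, a symmetry index normalized by total load, and time favoring a single limb as the fraction of samples in which the same limb carries the larger load) are common in the balance literature, but the study's exact formulas may differ.

```python
def wba_metrics(left, right):
    """Weight-bearing asymmetry metrics from paired left/right force
    samples (e.g. one reading per balance board per time step)."""
    n = len(left)
    pairs = list(zip(left, right))
    mmd = sum(abs(l - r) for l, r in pairs) / n                    # mean mass difference
    si = 100.0 * sum(abs(l - r) / (l + r) for l, r in pairs) / n   # symmetry index, %
    favored_left = sum(1 for l, r in pairs if l > r)
    tfsl = max(favored_left, n - favored_left) / n                 # time favoring one limb
    return mmd, si, tfsl

mmd, si, tfsl = wba_metrics([40.0, 42.0, 41.0], [38.0, 40.0, 39.0])
```

In a feedback display, the symmetry index would be recomputed and shown in real time so the user can re-center their load between limbs.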
ERIC Educational Resources Information Center
Feltham, Max G.; Ledebt, Annick; Deconinck, Frederik J. A.; Savelsbergh, Geert J. P.
2010-01-01
The study examined the effects of mirror feedback information on neuromuscular activation during bimanual coordination in eight children with spastic hemiparetic cerebral palsy (SHCP) and a matched control group. The "mirror box" creates a visual illusion, which gives rise to a visual perception of a zero lag, symmetric movement between the two…
Sharma, Vinod; Simpson, Richard; Lopresti, Edmund; Schmeler, Mark
2010-01-01
Some individuals with disabilities are denied powered mobility because they lack the visual, motor, and/or cognitive skills required to safely operate a power wheelchair. The Drive-Safe System (DSS) is an add-on, distributed, shared-control navigation assistance system for power wheelchairs intended to provide safe and independent mobility to such individuals. The DSS is a human-machine system in which the user is responsible for high-level control of the wheelchair, such as choosing the destination, path planning, and basic navigation actions, while the DSS overrides unsafe maneuvers through autonomous collision avoidance, wall following, and door crossing. In this project, the DSS was clinically evaluated in a controlled laboratory with blindfolded, nondisabled individuals. Further, these individuals' performance with the DSS was compared with standard cane use for navigation assistance by people with visual impairments. Results indicate that compared with a cane, the DSS significantly reduced the number of collisions. Users rated the DSS favorably even though they took longer to navigate the same obstacle course than they would have using a standard long cane. Participants experienced less physical demand, effort, and frustration when using the DSS as compared with a cane. These findings suggest that the DSS can be a viable powered mobility solution for wheelchair users with visual impairments.
Tapia, Evelina; Beck, Diane M.
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness. PMID:25374548
GeoCrystal: graphic-interactive access to geodata archives
NASA Astrophysics Data System (ADS)
Goebel, Stefan; Haist, Joerg; Jasnoch, Uwe
2002-03-01
Recently, a lot of effort has been spent on establishing information systems and global infrastructures that enable both data suppliers and users to describe (-> eCommerce, metadata) as well as to find appropriate data. Examples of this are metadata information systems, online shops, and portals for geodata. The main disadvantage of existing approaches is that their methods and mechanisms for leading users to (e.g., spatial) data archives are insufficient. This concerns usability and personalization in general, as well as visual feedback techniques in the different steps of the information-retrieval process. Several approaches aim at improving graphical user interfaces by using intuitive metaphors, but only some of them offer 3D interfaces in the form of information landscapes or geographic result scenes in the context of information systems for geodata. This paper presents GeoCrystal, whose basic idea is to adopt Venn diagrams to compose complex queries and to visualize search results in a 3D information and navigation space for geodata. These concepts are enhanced with spatial metaphors and 3D information landscapes (a library for geodata) in which users can specify searches for appropriate geodata and can interact graphically with search results (book metaphor).
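The Venn-diagram metaphor maps naturally onto set operations over per-criterion hit lists. The following is a minimal sketch of that idea (identifiers invented; GeoCrystal's actual query engine is not described at this level of detail):

```python
def venn_region(hits_a, hits_b, region):
    """Compose two result sets the way a two-circle Venn diagram does:
    each clickable region of the diagram corresponds to one set expression
    over the per-criterion hit sets."""
    a, b = set(hits_a), set(hits_b)
    regions = {
        "both": a & b,        # intersection: records matching both criteria
        "either": a | b,      # union: records matching at least one
        "a_only": a - b,
        "b_only": b - a,
    }
    return regions[region]
```

Selecting a diagram region then reduces to a dictionary lookup, and the resulting record set can be laid out in the 3D information landscape.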
Exploring the simulation requirements for virtual regional anesthesia training
NASA Astrophysics Data System (ADS)
Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.
2010-01-01
This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end, we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relations of different human parts without any physical constraints. Furthermore, the proposed system aims to assist trainee anaesthetists in building a mental, three-dimensional map of the anatomical elements and their depictive relationship to the ultrasound imaging that is used for navigation of the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface, and presents a tentative plan of future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.
Review of Designs for Haptic Data Visualization.
Paneels, Sabrina; Roberts, Jonathan C
2010-01-01
There are many different uses for haptics, such as training medical practitioners, teleoperation, or navigation of virtual environments. This review focuses on haptic methods that display data. The hypothesis is that haptic devices can be used to present information, and consequently, the user gains quantitative, qualitative, or holistic knowledge about the presented data. Not only is this useful for users who are blind or partially sighted (who can feel line graphs, for instance), but also the haptic modality can be used alongside other modalities, to increase the amount of variables being presented, or to duplicate some variables to reinforce the presentation. Over the last 20 years, a significant amount of research has been done in haptic data presentation; e.g., researchers have developed force feedback line graphs, bar charts, and other forms of haptic representations. However, previous research is published in different conferences and journals, with different application emphases. This paper gathers and collates these various designs to provide a comprehensive review of designs for haptic data visualization. The designs are classified by their representation: Charts, Maps, Signs, Networks, Diagrams, Images, and Tables. This review provides a comprehensive reference for researchers and learners, and highlights areas for further research.
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
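Marker-based pose estimation of the kind described above ultimately reduces to recovering a rigid transform between the tracked 3D navigation aids and their camera-frame counterparts. The paper's specific real-time algorithms are not given here, so as a hedged illustration the sketch below uses the generic Kabsch/SVD method for point-set registration; the function name and point layout are assumptions, not taken from the system.

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    """Least-squares rigid transform (R, t) such that
    R @ model_pts[i] + t ~= observed_pts[i] (Kabsch/SVD method)."""
    cm = model_pts.mean(axis=0)             # centroid of the model points
    co = observed_pts.mean(axis=0)          # centroid of the observations
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

Given at least three non-collinear tracked points, the recovered (R, t) fixes the camera pose relative to the navigation aids, which is what allows hidden structures to be superimposed on the video image.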
Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark
2015-10-01
Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for a L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.
Performance analysis of device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng
2016-10-01
The Strap-Down Inertial Navigation System (SINS) is a widely used navigation system. Combining SINS with the Celestial Navigation System (CNS) is a popular way to constitute an integrated navigation system. A Star Sensor (SS) is used as a precise attitude-determination device in CNS. To solve the problem that the star image obtained by the SS under dynamic conditions is motion-blurred, the Attitude Correlated Frames (ACF) approach is presented, and a star sensor that works based on the ACF approach is named an ACFSS. Building on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback of the gyro error into the ACF process is one of the typical characteristics of this SINS/CNS deeply integrated navigation method. Simulation results verify its validity and efficiency in improving gyro accuracy and show that the method is feasible in theory.
Effect of Real-Time Feedback on Screw Placement Into Synthetic Cancellous Bone.
Gustafson, Peter A; Geeslin, Andrew G; Prior, David M; Chess, Joseph L
2016-08-01
The objective of this study is to evaluate whether real-time torque feedback may reduce the occurrence of stripping when inserting nonlocking screws through fracture plates into synthetic cancellous bone. Five attending orthopaedic surgeons and 5 senior level orthopaedic residents inserted 8 screws in each phase. In phase I, screws were inserted without feedback simulating conventional techniques. In phase II, screws were driven with visual torque feedback. In phase III, screws were again inserted with conventional techniques. Comparison of these 3 phases with respect to screw insertion torque, surgeon rank, and perception of stripping was used to establish the effects of feedback. Seventy-three of 239 screws resulted in stripping. During the first phase, no feedback was provided and the overall strip rate was 41.8%; this decreased to 15% with visual feedback (P < 0.001) and returned to 35% when repeated without feedback. With feedback, a lower average torque was applied over a narrower torque distribution. Residents stripped 40.8% of screws compared with 20.2% for attending surgeons. Surgeons were poor at perceiving whether they stripped. Prevention and identification of stripping is influenced by surgeon perception of tactile sensation. This is significantly improved with utilization of real-time visual feedback of a torque versus roll curve. This concept of real-time feedback seems beneficial toward performance in synthetic cancellous bone and may lead to improved fixation in cancellous bone in a surgical setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, Katsumasa; Shioyama, Yoshiyuki; Nomoto, Satoru
2007-05-01
Purpose: The voluntary breath-hold (BH) technique is a simple method to control the respiration-related motion of a tumor during irradiation. However, the abdominal and chest wall position may not be accurately reproduced using the BH technique. The purpose of this study was to examine whether visual feedback can reduce the fluctuation in wall motion during BH using a new respiratory monitoring device. Methods and Materials: We developed a laser-based BH monitoring and visual feedback system. For this study, five healthy volunteers were enrolled. The volunteers, practicing abdominal breathing, performed shallow end-expiration BH (SEBH), shallow end-inspiration BH (SIBH), and deep end-inspiration BH (DIBH) with or without visual feedback. The abdominal and chest wall positions were measured at 80-ms intervals during BHs. Results: The fluctuation in the chest wall position was smaller than that of the abdominal wall position. The reproducibility of the wall position was improved by visual feedback. With a monitoring device, visual feedback reduced the mean deviation of the abdominal wall from 2.1 ± 1.3 mm to 1.5 ± 0.5 mm, 2.5 ± 1.9 mm to 1.1 ± 0.4 mm, and 6.6 ± 2.4 mm to 2.6 ± 1.4 mm in SEBH, SIBH, and DIBH, respectively. Conclusions: Volunteers can perform the BH maneuver in a highly reproducible fashion when informed about the position of the wall, although in the case of DIBH, the deviation in the wall position remained substantial.
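The figures of the form 2.1 ± 1.3 mm reported above are mean ± SD of per-breath-hold deviations computed from wall positions sampled every 80 ms. A minimal sketch of that metric; the function names and the exact averaging convention are assumptions, not taken from the paper:

```python
import numpy as np

def breathhold_deviation(positions_mm):
    """Mean absolute deviation of one wall-position trace (one sample
    every 80 ms) from its own breath-hold mean, in mm."""
    p = np.asarray(positions_mm, dtype=float)
    return float(np.mean(np.abs(p - p.mean())))

def cohort_summary(traces):
    """Mean and SD of the per-breath-hold deviations, i.e. the
    'x ± y mm' numbers quoted in the abstract."""
    devs = [breathhold_deviation(t) for t in traces]
    return float(np.mean(devs)), float(np.std(devs))
```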
Memory-guided force control in healthy younger and older adults.
Neely, Kristina A; Samimy, Shaadee; Blouch, Samantha L; Wang, Peiyuan; Chennavasin, Amanda; Diaz, Michele T; Dennis, Nancy A
2017-08-01
Successful performance of a memory-guided motor task requires participants to store and then recall an accurate representation of the motor goal. Further, participants must monitor motor output to make adjustments in the absence of visual feedback. The goal of this study was to examine memory-guided grip force in healthy younger and older adults and compare it to performance on behavioral tasks of working memory. Previous work demonstrates that healthy adults decrease force output as a function of time when visual feedback is not available. We hypothesized that older adults would decrease force output at a faster rate than younger adults, due to age-related deficits in working memory. Two groups of participants, younger adults (YA: N = 32, mean age 21.5 years) and older adults (OA: N = 33, mean age 69.3 years), completed four 20-s trials of isometric force with their index finger and thumb, equal to 25% of their maximum voluntary contraction. In the full-vision condition, visual feedback was available for the duration of the trial. In the no vision condition, visual feedback was removed for the last 12 s of each trial. Participants were asked to maintain constant force output in the absence of visual feedback. Participants also completed tasks of word recall and recognition and visuospatial working memory. Counter to our predictions, when visual feedback was removed, younger adults decreased force at a faster rate compared to older adults and the rate of decay was not associated with behavioral performance on tests of working memory.
Underwater and surface behavior of homing juvenile northern elephant seals.
Matsumura, Moe; Watanabe, Yuuki Y; Robinson, Patrick W; Miller, Patrick J O; Costa, Daniel P; Miyazaki, Nobuyuki
2011-02-15
Northern elephant seals, Mirounga angustirostris, travel between colonies along the west coast of North America and foraging areas in the North Pacific. They also have the ability to return to their home colony after being experimentally translocated. However, the mechanisms of this navigation are not known. Visual information could serve an important role in navigation, either primary or supplementary. We examined the role of visual cues in elephant seal navigation by translocating three seals and recording their heading direction continuously using GPS, and acceleration and geomagnetic data loggers while they returned to the colony. The seals first reached the coast and then proceeded to the colony by swimming along the coast. While underwater the animals exhibited a horizontally straight course (mean net-to-gross displacement ratio=0.94±0.02). In contrast, while at the surface they changed their headings up to 360 deg. These results are consistent with the use of visual cues for navigation to the colony. The seals may visually orient by using landmarks as they swim along the coast. We further assessed whether the seals could maintain a consistent heading while underwater during drift dives where one might expect that passive spiraling during drift dives could cause disorientation. However, seals were able to maintain the initial course heading even while underwater during drift dives where there was spiral motion (to within 20 deg). This behavior may imply the use of non-visual cues such as acoustic signals or magnetic fields for underwater orientation.
Stimulus change as a factor in response maintenance with free food available.
Osborne, S R; Shelby, M
1975-01-01
Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121
Selective Activation of the Deep Layers of the Human Primary Visual Cortex by Top-Down Feedback.
Kok, Peter; Bains, Lauren J; van Mourik, Tim; Norris, David G; de Lange, Floris P
2016-02-08
In addition to bottom-up input, the visual cortex receives large amounts of feedback from other cortical areas [1-3]. One compelling example of feedback activation of early visual neurons in the absence of bottom-up input occurs during the famous Kanizsa illusion, where a triangular shape is perceived, even in regions of the image where there is no bottom-up visual evidence for it. This illusion increases the firing activity of neurons in the primary visual cortex with a receptive field on the illusory contour [4]. Feedback signals are largely segregated from feedforward signals within each cortical area, with feedforward signals arriving in the middle layer, while top-down feedback avoids the middle layers and predominantly targets deep and superficial layers [1, 2, 5, 6]. Therefore, the feedback-mediated activity increase in V1 during the perception of illusory shapes should lead to a specific laminar activity profile that is distinct from the activity elicited by bottom-up stimulation. Here, we used fMRI at high field (7 T) to empirically test this hypothesis, by probing the cortical response to illusory figures in human V1 at different cortical depths [7-14]. We found that, whereas bottom-up stimulation activated all cortical layers, feedback activity induced by illusory figures led to a selective activation of the deep layers of V1. These results demonstrate the potential for non-invasive recordings of neural activity with laminar specificity in humans and elucidate the role of top-down signals during perceptual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sklar, A E; Sarter, N B
1999-12-01
Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.
Game-Based Augmented Visual Feedback for Enlarging Speech Movements in Parkinson's Disease.
Yunusova, Yana; Kearney, Elaine; Kulkarni, Madhura; Haworth, Brandon; Baljko, Melanie; Faloutsos, Petros
2017-06-22
The purpose of this pilot study was to demonstrate the effect of augmented visual feedback on acquisition and short-term retention of a relatively simple instruction to increase movement amplitude during speaking tasks in patients with dysarthria due to Parkinson's disease (PD). Nine patients diagnosed with PD, hypokinetic dysarthria, and impaired speech intelligibility participated in a training program aimed at increasing the size of their articulatory (tongue) movements during sentences. Two sessions were conducted: a baseline and training session, followed by a retention session 48 hr later. At baseline, sentences were produced at normal, loud, and clear speaking conditions. Game-based visual feedback regarding the size of the articulatory working space (AWS) was presented during training. Eight of nine participants benefited from training, increasing their sentence AWS to a greater degree following feedback as compared with the baseline loud and clear conditions. The majority of participants were able to demonstrate the learned skill at the retention session. This study demonstrated the feasibility of augmented visual feedback via articulatory kinematics for training movement enlargement in patients with hypokinesia due to PD. https://doi.org/10.23641/asha.5116840.
2002-01-01
...feedback signals were derived from the motion of the platform rather than directly measured, though an actual spacecraft would likely utilize... large position error spikes due to target motion reversal. Of course, these tracking errors are highly dependent on the feedback gains chosen for the...
Key Words: MQW Retromodulators, Modulating Retroreflector(s), Inter-spacecraft communications and navigation, space control
Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.
Liverence, Brandon M; Scholl, Brian J
2015-07-01
Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.
INSIGHT: RFID and Bluetooth enabled automated space for the blind and visually impaired.
Ganz, Aura; Gandhi, Siddhesh Rajan; Wilson, Carole; Mullett, Gary
2010-01-01
In this paper we introduce INSIGHT, an indoor location tracking and navigation system to help the blind and visually impaired to easily navigate to their chosen destination in a public building. INSIGHT makes use of RFID and Bluetooth technology deployed within the building to locate and track the users. The PDA based user device interacts with INSIGHT server and provides the user navigation instructions in an audio form. The proposed system provides multi-resolution localization of the users, facilitating the provision of accurate navigation instructions when the user is in the vicinity of the RFID tags as well as accommodating a PANIC button which provides navigation instructions when the user is anywhere in the building. Moreover, the system will continuously monitor the zone in which the user walks. This will enable the system to identify if the user is located in the wrong zone of the building which may not lead to the desired destination.
Sundvall, Erik; Nyström, Mikael; Forss, Mattias; Chen, Rong; Petersson, Håkan; Ahlfeldt, Hans
2007-01-01
This paper describes selected earlier approaches to graphically relating events to each other and to time; some new combinations are also suggested. These are then combined into a unified prototyping environment for visualization and navigation of electronic health records. Google Earth (GE) is used for handling display and interaction of clinical information stored using openEHR data structures and 'archetypes'. The strength of the approach comes from GE's sophisticated handling of detail levels, from coarse overviews to fine-grained details, combined with linear, polar and region-based views of clinical events related to time. The system should be easy to learn since all the visualization styles can use the same navigation. The structured and multifaceted approach to handling time that is possible with archetyped openEHR data lends itself well to visualization, and integration with openEHR components is provided in the environment.
Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Sugano, Nobuhiko
2014-06-01
Anterior sacroiliac joint plate fixation for unstable pelvic ring fractures avoids soft tissue problems in the buttocks; however, the lumbosacral nerves lie in close proximity to the sacroiliac joint and may be injured during the procedure. A 49 year-old woman with a type C pelvic ring fracture was treated with an anterior sacroiliac plate using a computed tomography (CT)-three-dimensional (3D)-fluoroscopy matching navigation system, which visualized the lumbosacral nerves as well as the iliac and sacral bones. We used a flat panel detector 3D C-arm, which made it possible to superimpose our preoperative CT-based plan on the intra-operative 3D-fluoroscopic images. No postoperative complications were noted. Intra-operative lumbosacral nerve visualization using computer navigation was useful to recognize the 'at-risk' area for nerve injury during anterior sacroiliac plate fixation. Copyright © 2013 John Wiley & Sons, Ltd.
Liau, Ee Shan; Yen, Ya-Ping; Chen, Jun-An
2018-05-11
Spinal motor neurons (MNs) extend their axons to communicate with their innervating targets, thereby controlling movement and complex tasks in vertebrates. Thus, it is critical to uncover the molecular mechanisms of how motor axons navigate to, arborize, and innervate their peripheral muscle targets during development and degeneration. Although transgenic Hb9::GFP mouse lines have long served to visualize motor axon trajectories during embryonic development, detailed descriptions of the full spectrum of axon terminal arborization remain incomplete due to the pattern complexity and limitations of current optical microscopy. Here, we describe an improved protocol that combines light sheet fluorescence microscopy (LSFM) and robust image analysis to qualitatively and quantitatively visualize developing motor axons. This system can be easily adopted to cross genetic mutants or MN disease models with Hb9::GFP lines, revealing novel molecular mechanisms that lead to defects in motor axon navigation and arborization.
Effect of Concurrent Visual Feedback Frequency on Postural Control Learning in Adolescents.
Marco-Ahulló, Adrià; Sánchez-Tormo, Alexis; García-Pérez, José A; Villarrasa-Sapiña, Israel; González, Luis M; García-Massó, Xavier
2018-04-13
The purpose was to determine the better augmented visual feedback frequency (100% or 67%) for learning a balance task in adolescents. Thirty subjects were divided randomly into a control group, a 100% feedback group, and a 67% feedback group. The three groups performed a pretest (3 trials), practice (12 trials), a posttest (3 trials), and retention (3 trials, 24 hours later). The reduced-feedback group showed lower RMS in the posttest than in the pretest (p = 0.04). The control and reduced-feedback groups showed significantly lower median frequency in the posttest than in the pretest (p < 0.05). Both feedback groups showed lower values in retention than in the pretest (p < 0.05). Although an effect of feedback frequency on motor learning could not be detected, the 67% feedback frequency is recommended for motor adaptation.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P.
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
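The entropy criterion described above can be made concrete with the Shannon entropy of an image's intensity histogram: near zero for a view dominated by a single uniform object, and approaching 8 bits for an 8-bit image of heterogeneous clutter. A minimal sketch, assuming a grayscale image; the decision threshold and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's intensity
    histogram; low entropy suggests a single dominant object."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def classify_view(gray, threshold=4.0):
    """Hypothetical decision rule: treat a low-entropy view as a single
    candidate landmark, a high-entropy view as clutter/obstacle."""
    return "landmark" if image_entropy(gray) < threshold else "obstacle"
```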
Project Management Using Modern Guidance, Navigation and Control Theory
NASA Technical Reports Server (NTRS)
Hill, Terry
2010-01-01
The idea of control theory and its application to project management is not new; however, literature on the topic and real-world applications are not as readily available, nor as comprehensive in how all the principles of Guidance, Navigation and Control (GN&C) apply. This paper will address how the fundamental principles of modern GN&C theory have been applied to NASA's Constellation Space Suit project, and the resulting ability to manage the project within cost, schedule and budget. As with physical systems, projects can be modeled and managed with the same guiding principles of GN&C as if the project were a complex vehicle, system or software with time-varying processes, at times non-linear responses, multiple data inputs of varying accuracy and a range of operating points. With such systems the classic approach could be applied to small and well-defined projects; however, with larger multi-year projects involving multiple organizational structures, external influences and a multitude of diverse resources, modern control theory is required to model and control the project. The fundamental principles of GN&C state that a system comprises these basic core concepts: state, behavior, control system, navigation system, guidance and planning logic, and feedback systems. The state of a system is a definition of the aspects of the dynamics of the system that can change, such as position, velocity, acceleration, coordinate-based attitude, temperature, etc. The behavior of the system concerns what changes are possible rather than what can change, which is captured in the state of the system. The behavior of a system is captured in the system modeling and, if properly done, will aid in accurate prediction of system performance in the future. The control system understands the state and behavior of the system and uses feedback systems to adjust the control inputs into the system.
The navigation system takes the multiple data inputs and, based upon a priori knowledge of each input, develops a statistically based weighting of the inputs to determine where the system currently is. The guidance and planning logic, with the understanding of where the system is (provided by the navigation system), in turn determines where it needs to be and how to get there. Lastly, the feedback system is the right arm of the control system, allowing it to effect change in the overall system; it is therefore critical not only to correctly identify the system feedback inputs but also the system response to those inputs. And with any systems project it is critical that the objective of the system be clearly defined, not only for planning but also to measure performance and to aid in the guidance of the system or project.
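The navigation-system step described above, weighting multiple inputs by a priori knowledge of their accuracy, corresponds in its simplest statistical form to inverse-variance weighting of independent estimates. A minimal sketch under that assumption (the paper does not specify a particular estimator):

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates of the
    same quantity: each input is a (value, variance) pair, and more
    accurate inputs (smaller variance) receive more weight."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total   # fused value and fused variance
```

The fused variance is always smaller than that of the best single input, which is why combining several imperfect status reports can still localize "where the project currently is" quite precisely.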
How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?
ERIC Educational Resources Information Center
Gray, Rob
2009-01-01
Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…
NASA Astrophysics Data System (ADS)
Long, Junjiajia; Zucker, Steven W.; Emonet, Thierry
The capability to navigate environmental gradients is of critical importance for survival. Countless organisms (microbes, human cells, worms, larvae, and insects) as well as human-made robots use a run-and-tumble strategy to do so. The classical drawback of this approach is that runs in the wrong direction are wasteful. We show analytically that organisms can overcome this fundamental limitation by exploiting the non-normal dynamics and intrinsic nonlinearities inherent to the positive feedback between motion and sensation. Most importantly, this nonlinear amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a ``ratchet-like'' gradient climbing behavior with drift speeds that can approach half the maximum run speed of the organism. By extending the theoretical study of run-and-tumble navigation into the non-mean-field, nonlinear, and non-normal domains, our results provide a new level of understanding about this basic strategy. We thank Yale HPC, NIGMS 1R01GM106189, and the Allen Distinguished Investigator Program through The Paul G. Allen Frontiers Group for support.
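The asymmetric run elongation described above can be illustrated with a toy 1D run-and-tumble simulation in which the tumble rate is suppressed while moving up-gradient and raised otherwise. All parameter values are illustrative assumptions, not taken from the study, and a tumble is simplified to a direction reversal:

```python
import numpy as np

def run_and_tumble(steps=20000, dt=0.01, v=1.0,
                   base_rate=1.0, gain=0.5, seed=0):
    """Toy 1D run-and-tumble in a gradient increasing with x: the
    tumble rate is low while climbing (direction = +1) and high while
    descending, which asymmetrically elongates favorable runs."""
    rng = np.random.default_rng(seed)
    x, direction = 0.0, 1
    for _ in range(steps):
        rate = base_rate * (1.0 - gain * direction)  # suppressed when climbing
        if rng.random() < rate * dt:                 # Poisson tumble event
            direction = -direction                   # tumble (1D simplification)
        x += v * direction * dt
    return x

# net drift speed as a fraction of the run speed v
drift_speed = run_and_tumble() / (20000 * 0.01)
```

With these parameters the mean up-gradient run lasts 1/0.5 = 2 s versus 1/1.5 ≈ 0.67 s down-gradient, so the expected drift is roughly v(τ₊ − τ₋)/(τ₊ + τ₋) ≈ 0.5v, of the order the abstract reports.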
Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R
2015-02-01
Previous transcranial magnetic stimulation (TMS) studies suggested that feedback from higher to lower areas of the visual cortex is important for the access of visual information to awareness. However, the influence of cortico-cortical feedback on awareness and the nature of the feedback effects are not yet completely understood. In the present study, we used electrical microstimulation in the visual cortex of monkeys to test the hypothesis that cortico-cortical feedback plays a role in visual awareness. We investigated the interactions between the primary visual cortex (V1) and area V4 by applying microstimulation in both cortical areas at various delays. We report that the monkeys detected the phosphenes produced by V1 microstimulation but subthreshold V4 microstimulation did not influence V1 phosphene detection thresholds. A second experiment examined the influence of V4 microstimulation on the monkeys' ability to detect the dimming of one of three peripheral visual stimuli. Again, microstimulation of a group of V4 neurons failed to modulate the monkeys' perception of a stimulus in their receptive field. We conclude that conditions exist where microstimulation of area V4 has only a limited influence on visual perception. Copyright © 2015 the American Physiological Society.
Slavin, M J; Phillips, J G; Bradshaw, J L; Hall, K A; Presnell, I
1999-01-01
Patients with dementia of the Alzheimer's type (DAT) and their matched controls wrote, on a computer graphics tablet, 4 consecutive, cursive letter 'l's, with varying levels of visual feedback: noninking pen and blank paper so that only the hand movements could be seen, noninking pen and lined paper to constrain their writing, goggles to occlude the lower visual field and eliminate all relevant visual feedback, and inking pen with full vision. The kinematic measures of stroke length, duration, and peak velocity were expressed in terms of consistency via a signal-to-noise ratio (M value of each parameter divided by its SD). Irrespective of medication or severity, DAT patients had writing strokes of significantly less consistent lengths than controls', and were disproportionately impaired by reduced visual feedback. Again irrespective of medication or severity, patients' strokes were of significantly less consistent duration, and significantly less consistent peak velocity than controls', independent of feedback conditions. Patients, unlike controls, frequently perseverated, producing more than 4 'l's, or multiple sets of responses, which was not differentially affected by level of visual feedback. The more variable performance of patients supports a degradation of the base motor program, and resembles that of Huntington's rather than Parkinson's disease patients. It may indeed reflect frontal rather than basal ganglia dysfunction.
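The consistency metric above (mean of a kinematic parameter divided by its SD) is easy to make concrete. A minimal sketch follows; the stroke values are invented for illustration and are not the study's data.

```python
# Consistency as a signal-to-noise ratio: mean / SD of a kinematic
# parameter (e.g. stroke length). Higher values = more consistent strokes.
import statistics

def consistency_snr(values):
    """Mean divided by sample standard deviation."""
    return statistics.mean(values) / statistics.stdev(values)

control_strokes = [10.1, 9.9, 10.0, 10.2]   # mm, tight spread
patient_strokes = [10.0, 7.5, 12.4, 9.1]    # similar mean, wide spread

# Less consistent lengths yield a lower SNR, as reported for DAT patients.
assert consistency_snr(control_strokes) > consistency_snr(patient_strokes)
```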
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
ERIC Educational Resources Information Center
Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data, or to demonstrate navigational competence in real-world environments without regard…
ERIC Educational Resources Information Center
Firat, Mehmet; Kabakci, Isil
2010-01-01
The interactional feature of hypermedia that allows high-level student-control is considered as one of the most important advantages that hypermedia provides for learning and teaching. However, high-level student control in hypermedia might not always lead to high-level learning performance. The learner is likely to experience navigation problems…
The Use of Visual Feedback During Signing: Evidence From Signers With Impaired Vision
Korpics, Franco; Petronio, Karen
2009-01-01
The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf person. Signers with tunnel vision produced a greater proportion of signs near the face than blind and normally sighted signers, who did not differ from each other. Both groups of visually impaired signers produced signs within a smaller signing space for conversations than for monologues, but we hypothesize that they did so for different reasons. Signers with tunnel vision may align their signing space with that of their interlocutor. In contrast, blind signers may enhance proprioceptive feedback by producing signs within an enlarged signing space for monologues, which do not require switching between tactile and visual signing. Overall, we hypothesize that signers use visual feedback to phonetically calibrate the dimensions of signing space, rather than to monitor language output. PMID:18495656
The use of visual feedback during signing: evidence from signers with impaired vision.
Emmorey, Karen; Korpics, Franco; Petronio, Karen
2009-01-01
The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf person. Signers with tunnel vision produced a greater proportion of signs near the face than blind and normally sighted signers, who did not differ from each other. Both groups of visually impaired signers produced signs within a smaller signing space for conversations than for monologues, but we hypothesize that they did so for different reasons. Signers with tunnel vision may align their signing space with that of their interlocutor. In contrast, blind signers may enhance proprioceptive feedback by producing signs within an enlarged signing space for monologues, which do not require switching between tactile and visual signing. Overall, we hypothesize that signers use visual feedback to phonetically calibrate the dimensions of signing space, rather than to monitor language output.
Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.
2013-01-01
We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256
Real-time visual mosaicking and navigation on the seafloor
NASA Astrophysics Data System (ADS)
Richmond, Kristof
Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. 
Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon , where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
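The "sparse, linear information filter" mentioned above can be sketched in a few lines: in information (inverse-covariance) form, fusing a measurement only *adds* terms to the information matrix and vector, which is what keeps large map updates cheap. The 1D two-pose example below is illustrative only and is not the thesis's implementation.

```python
# Measurement update in information form: for z = H x + noise(R),
# Lambda += H^T R^-1 H and eta += H^T R^-1 z. A registration "link"
# between two overlapping camera poses touches only those poses' entries.
import numpy as np

def information_update(Lambda, eta, H, R, z):
    """Fuse a linear measurement into the information matrix/vector."""
    R_inv = np.linalg.inv(R)
    Lambda_new = Lambda + H.T @ R_inv @ H
    eta_new = eta + H.T @ R_inv @ z
    return Lambda_new, eta_new

# Two 1D poses; an image-registration link measures their relative offset.
Lambda = np.eye(2) * 1e-2                 # weak prior on both poses
eta = np.zeros(2)
H = np.array([[-1.0, 1.0]])               # link measures x1 - x0
R = np.array([[0.1]])                     # registration noise
Lambda, eta = information_update(Lambda, eta, H, R, np.array([2.0]))

x_mean = np.linalg.solve(Lambda, eta)     # recover poses only when needed
```

Because the prior is weak, the recovered relative offset `x_mean[1] - x_mean[0]` sits very close to the measured 2.0.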
Murray, Trevor; Zeil, Jochen
2017-01-01
Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes
Zeil, Jochen
2017-01-01
Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its ‘catchment area’) has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the ‘catchment volumes’ within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots. PMID:29088300
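The image-difference idea underlying catchment areas and volumes can be sketched directly: compare a stored panoramic snapshot with the current view at every horizontal rotation and take the rotation that minimizes the pixel difference. The 1D "panoramas" below are toy random arrays, not rendered views from a 3D model.

```python
# Rotational image difference function: RMS pixel difference between a
# reference snapshot and every circular shift of the current view. The
# minimum recovers the rotation needed to realign with the snapshot.
import numpy as np

def rot_image_diff(current, snapshot):
    n = len(current)
    return np.array([np.sqrt(np.mean((np.roll(current, s) - snapshot) ** 2))
                     for s in range(n)])

rng = np.random.default_rng(0)
snapshot = rng.random(360)            # reference panorama, 1-degree bins
current = np.roll(snapshot, -25)      # same location, rotated by 25 degrees
diffs = rot_image_diff(current, snapshot)
best_heading = int(np.argmin(diffs))  # rotating back by 25 degrees realigns
```

Evaluating the *minimum* of this curve at many positions around the reference location is what maps out a catchment area (or, with views rendered through a volume, a catchment volume).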
Proprioceptive feedback determines visuomotor gain in Drosophila
Bartussek, Jan; Lehmann, Fritz-Olaf
2016-01-01
Multisensory integration is a prerequisite for effective locomotor control in most animals. In particular, the impressive aerial performance of insects relies on rapid and precise integration of multiple sensory modalities that provide feedback on different time scales. In flies, continuous visual signalling from the compound eyes is fused with phasic proprioceptive feedback to ensure precise neural activation of wing steering muscles (WSM) within narrow temporal phase bands of the stroke cycle. This phase-locked activation relies on mechanoreceptors distributed over wings and gyroscopic halteres. Here we investigate visual steering performance of tethered flying fruit flies with reduced haltere and wing feedback signalling. Using a flight simulator, we evaluated visual object fixation behaviour, optomotor altitude control and saccadic escape reflexes. The behavioural assays show an antagonistic effect of wing and haltere signalling on visuomotor gain during flight. Compared with controls, suppression of haltere feedback attenuates, while suppression of wing feedback enhances, the animal's wing steering range. Our results suggest that the generation of motor commands owing to visual perception is dynamically controlled by proprioception. We outline a potential physiological mechanism based on the biomechanical properties of WSM and sensory integration processes at the level of motoneurons. Collectively, the findings contribute to our general understanding of how moving animals integrate sensory information with dynamically changing temporal structure. PMID:26909184
Modern Inertial and Satellite Navigation Systems
1994-05-02
rotor spins, the harder it is to disturb it. This technique is called spin stabilization and it is commonly used for communication satellites. … using a generalization of the complex number called the quaternion. … Positive feedback arises from the use of a lasing medium, a gas, liquid, crystal ions, or any of a number of other possibilities
Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.
2012-01-01
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535
Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M
2018-05-09
Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.
Raaben, Marco; Holtslag, Herman R; Leenen, Luke P H; Augustine, Robin; Blokhuis, Taco J
2018-01-01
Individuals with lower extremity fractures are often instructed on how much weight to bear on the affected extremity. Previous studies have shown limited therapy compliance in weight bearing during rehabilitation. In this study we investigated the effect of real-time visual biofeedback on weight bearing in individuals with lower extremity fractures in two conditions: full weight bearing and touch-down weight bearing. 11 participants with full weight bearing and 12 participants with touch-down weight bearing after lower extremity fractures have been measured with an ambulatory biofeedback system. The participants first walked 15m and the biofeedback system was only used to register the weight bearing. The same protocol was then repeated with real-time visual feedback during weight bearing. The participants could thereby adapt their loading to the desired level and improve therapy compliance. In participants with full weight bearing, real-time visual biofeedback resulted in a significant increase in loading from 50.9±7.51% bodyweight (BW) without feedback to 63.2±6.74%BW with feedback (P=0.0016). In participants with touch-down weight bearing, the exerted lower extremity load decreased from 16.7±9.77kg without feedback to 10.27±4.56kg with feedback (P=0.0718). More important, the variance between individual steps significantly decreased after feedback (P=0.018). Ambulatory monitoring weight bearing after lower extremity fractures showed that therapy compliance is low, both in full and touch-down weight bearing. Real-time visual biofeedback resulted in significantly higher peak loads in full weight bearing and increased accuracy of individual steps in touch-down weight bearing. Real-time visual biofeedback therefore results in improved therapy compliance after lower extremity fractures. Copyright © 2017 Elsevier B.V. All rights reserved.
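The two compliance measures reported above (mean load as a percentage of bodyweight, and step-to-step variance) are simple to compute from per-step loads. The step values below are invented illustrative numbers, not the study's data.

```python
# Weight-bearing compliance: mean load as %bodyweight and the variance
# across individual steps, with and without visual biofeedback.
import statistics

def percent_bodyweight(step_loads_kg, bodyweight_kg):
    return [100.0 * load / bodyweight_kg for load in step_loads_kg]

steps_no_feedback = [38, 52, 61, 44, 70]   # kg per step, erratic loading
steps_feedback = [58, 61, 63, 60, 62]      # kg per step, near the target

bw = 100.0
mean_no_fb = statistics.mean(percent_bodyweight(steps_no_feedback, bw))
mean_fb = statistics.mean(percent_bodyweight(steps_feedback, bw))
var_no_fb = statistics.variance(steps_no_feedback)
var_fb = statistics.variance(steps_feedback)

assert mean_fb > mean_no_fb   # feedback raises loading toward the target
assert var_fb < var_no_fb     # and makes individual steps more uniform
```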
Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.
Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders
2017-10-01
The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box performance in 2D.
Code of Federal Regulations, 2011 CFR
2011-07-01
....65 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY: STANDARDS FOR CONDUCTING ALL...
Code of Federal Regulations, 2014 CFR
2014-07-01
....65 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY: STANDARDS FOR CONDUCTING ALL...
Code of Federal Regulations, 2012 CFR
2012-07-01
....65 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY: STANDARDS FOR CONDUCTING ALL...
Code of Federal Regulations, 2013 CFR
2013-07-01
....65 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OIL SPILL LIABILITY: STANDARDS FOR CONDUCTING ALL...
NASA Technical Reports Server (NTRS)
Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.
2001-01-01
This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.
Reschechtko, Sasha; Hasanbarani, Fariba; Akulin, Vladimir M; Latash, Mark L
2017-05-14
The study explored unintentional force changes elicited by removing visual feedback during cyclical, two-finger isometric force production tasks. Subjects performed two types of tasks at 1Hz, paced by an auditory metronome. One - Force task - required cyclical changes in total force while maintaining the sharing, defined as relative contribution of a finger to total force. The other task - Share task - required cyclical changes in sharing while keeping total force unchanged. Each trial started under full visual feedback on both force and sharing; subsequently, feedback on the variable that was instructed to stay constant was frozen, and finally feedback on the other variable was also removed. In both tasks, turning off visual feedback on total force elicited a drop in the mid-point of the force cycle and an increase in the peak-to-peak force amplitude. Turning off visual feedback on sharing led to a drift of mean share toward 50:50 across both tasks. Without visual feedback there was consistent deviation of the two force time series from the in-phase pattern (typical of the Force task) and from the out-of-phase pattern (typical of the Share task). This finding is in contrast to most earlier studies that demonstrated only two stable patterns, in-phase and out-of-phase. We interpret the results as consequences of drifts of parameters in a dynamical system leading in particular to drifts in the referent finger coordinates toward their actual coordinates. The relative phase desynchronization is caused by the right-left differences in the hypothesized drift processes, consistent with the dynamic dominance hypothesis. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
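The two task variables above, total force and sharing, are a simple decomposition of the two finger forces. A toy sketch with invented force values:

```python
# Decomposition used in the Force and Share tasks: total force F1 + F2
# and sharing, each finger's fraction of the total.
def task_variables(f1, f2):
    total = f1 + f2
    share1 = f1 / total
    return total, share1

# Force task: total force cycles while sharing stays constant (50:50).
assert task_variables(5.0, 5.0)[1] == task_variables(8.0, 8.0)[1] == 0.5

# Share task: sharing cycles while total force stays constant.
t_a, s_a = task_variables(3.0, 7.0)
t_b, s_b = task_variables(7.0, 3.0)
assert t_a == t_b == 10.0 and s_a != s_b
```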
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
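The discharge-variability measure above, the coefficient of variation of inter-spike intervals, can be computed directly from a motor unit's spike times. The spike trains below are synthetic, not recorded data.

```python
# Coefficient of variation (SD / mean) of inter-spike intervals:
# higher CV = more irregular motor unit firing.
import statistics

def isi_cv(spike_times):
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

regular = [0.0, 0.1, 0.2, 0.3, 0.4]        # steady 10 Hz firing
irregular = [0.0, 0.05, 0.2, 0.24, 0.4]    # same mean rate, jittered

# Jittered timing at the same mean rate raises the CV, as reported for
# higher-threshold MUs under error-augmentation feedback.
assert isi_cv(irregular) > isi_cv(regular)
```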
Reschechtko, Sasha; Hasanbarani, Fariba; Akulin, Vladimir M.; Latash, Mark L.
2017-01-01
The study explored unintentional force changes elicited by removing visual feedback during cyclical, two-finger isometric force production tasks. Subjects performed two types of tasks at 1 Hz, paced by an auditory metronome. One – Force task – required cyclical changes in total force while maintaining the sharing, defined as relative contribution of a finger to total force. The other task – Share task – required cyclical changes in sharing while keeping total force unchanged. Each trial started under full visual feedback on both force and sharing; subsequently, feedback on the variable that was instructed to stay constant was frozen, and finally feedback on the other variable was also removed. In both tasks, turning off visual feedback on total force elicited a drop in the mid-point of the force cycle and an increase in the peak-to-peak force amplitude. Turning off visual feedback on sharing led to a drift of mean share toward 50:50 across both tasks. Without visual feedback there was consistent deviation of the two force time series from the in-phase pattern (typical of the Force task) and from the out-of-phase pattern (typical of the Share task). This finding is in contrast to most earlier studies that demonstrated only two stable patterns, in-phase and out-of-phase. We interpret the results as consequences of drifts of parameters in a dynamical system leading in particular to drifts in the referent finger coordinates toward their actual coordinates. The relative phase desynchronization is caused by the right-left differences in the hypothesized drift processes, consistent with the dynamic dominance hypothesis. PMID:28344070
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface offers two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, 90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information about the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results of the interface are also shown in this work.
The effect of haptic guidance and visual feedback on learning a complex tennis task.
Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert
2013-11-01
While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (a discrete timing task) or learning a velocity profile (a time-critical tracking task). The aim of the present study was to evaluate which feedback conditions (visual or haptic guidance) optimize learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task, learning to start a tennis stroke, and (2) a tracking task, learning to follow a velocity profile. The effect that task difficulty and the subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in the time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
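The "optimal percept" referred to above is usually formalized as minimum-variance cue combination, in which each cue is weighted inversely to its variance. A minimal sketch of that standard model (the unimodal variability values are illustrative, not from the study):

```python
import math

def mle_integration(sigma_a, sigma_v):
    """Minimum-variance (maximum-likelihood) combination of two cues.

    Each cue is weighted inversely to its variance; the combined
    estimate's variance is lower than that of either cue alone.
    """
    var_a, var_v = sigma_a**2, sigma_v**2
    w_a = var_v / (var_a + var_v)   # weight on the auditory cue
    w_v = var_a / (var_a + var_v)   # weight on the visual cue
    var_av = (var_a * var_v) / (var_a + var_v)
    return w_a, w_v, math.sqrt(var_av)

# Illustrative values: auditory SD 4 mm, visual SD 3 mm
w_a, w_v, sigma_av = mle_integration(4.0, 3.0)
print(round(sigma_av, 2))  # 2.4, below both unimodal SDs
```

Under this model an added visual cue should always reduce variability below the best single cue; the study's finding that audiovisual feedback did not do so at peak velocity is what makes the result notable.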
Age-related similarities and differences in monitoring spatial cognition.
Ariel, Robert; Moffat, Scott D
2018-05-01
Spatial cognitive performance is impaired in later adulthood but it is unclear whether the metacognitive processes involved in monitoring spatial cognitive performance are also compromised. Inaccurate monitoring could affect whether people choose to engage in tasks that require spatial thinking and also the strategies they use in spatial domains such as navigation. The current experiment examined potential age differences in monitoring spatial cognitive performance in a variety of spatial domains including visual-spatial working memory, spatial orientation, spatial visualization, navigation, and place learning. Younger and older adults completed a 2D mental rotation test, 3D mental rotation test, paper folding test, spatial memory span test, two virtual navigation tasks, and a cognitive mapping test. Participants also made metacognitive judgments of performance (confidence judgments, judgments of learning, or navigation time estimates) on each trial for all spatial tasks. Preference for allocentric or egocentric navigation strategies was also measured. Overall, performance was poorer and confidence in performance was lower for older adults than younger adults. In most spatial domains, the absolute and relative accuracy of metacognitive judgments was equivalent for both age groups. However, age differences in monitoring accuracy (specifically relative accuracy) emerged in spatial tasks involving navigation. Confidence in navigating for a target location also mediated age differences in allocentric navigation strategy use. These findings suggest that with the possible exception of navigation monitoring, spatial cognition may be spared from age-related decline even though spatial cognition itself is impaired in older age.
Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types
Chiou, Shiau-Chuen; Chang, Erik Chihhung
2016-01-01
Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired via the auditory channel than via the visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how the modality (visual vs. auditory) and information type (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants learned a 90°-out-of-phase pattern for three consecutive days either with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed divergent performance changes after practice, once the feedback was removed, between the Lissajous group and the two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition, where an irregular rhythm counting task was applied as a secondary task, also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
NASA Technical Reports Server (NTRS)
Gaillard, J. P.
1981-01-01
The possibility of using electrotactile stimulation in teleoperation, and the interpretation of such information as feedback to the operator, was investigated. It is proposed that visual feedback is more informative than electrotactile feedback; and that complex electrotactile feedback slows down both the motor decision and motor response processes, is processed as an all-or-nothing signal, and bypasses the receptive structure to access directly a working memory where information is sequentially processed and where treatment capacity is limited. The electrotactile stimulation is used as an alerting signal. It is suggested that the visual dominance effect results from the advantage of both a transfer function and a sensory memory register where information is pretreated and memorized for a short time. It was found that dividing attention affects the acquisition of information but not the subsequent decision processes.
A direct comparison of short-term audiomotor and visuomotor memory.
Ward, Amanda M; Loucks, Torrey M; Ofori, Edward; Sosnoff, Jacob J
2014-04-01
Audiomotor and visuomotor short-term memory are required for an important variety of skilled movements but have not been compared in a direct manner previously. Audiomotor memory capacity might be greater to accommodate auditory goals that are less directly related to movement outcome than for visually guided tasks. Subjects produced continuous isometric force with the right index finger under auditory and visual feedback. During the first 10 s of each trial, subjects received continuous auditory or visual feedback. For the following 15 s, feedback was removed but the force had to be maintained accurately. An internal effort condition was included to test memory capacity in the same manner but without external feedback. Similar decay times of ~5-6 s were found for vision and audition but the decay time for internal effort was ~4 s. External feedback thus provides an advantage in maintaining a force level after feedback removal, but may not exclude some contribution from a sense of effort. Short-term memory capacity appears longer than certain previous reports but there may not be strong distinctions in capacity across different sensory modalities, at least for isometric force.
Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.
Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura
2017-10-16
Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve the control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with training based on continuous visual feedback. The goal of this study was to test whether, and to what extent, chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age: 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed, and subjects reached targets by shifting their Center of Pressure through trunk movements. Before and after training, subjects were required to reach, without visual feedback of the cursor, the training targets as well as other targets positioned at different directions and displacement amplitudes. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when the visual feedback was removed, most of them showed no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback provides limited benefits for stroke survivors if administered alone. 
However, the positive gains observed during training justify the integration of this technology-based protocol in a well-structured and personalized physiotherapy training, where the combination of the two approaches may lead to functional recovery.
Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.
Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D
2018-01-01
Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but not as well-described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
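Fitts' law itself is a one-line model: movement time grows linearly with an index of difficulty that depends on target amplitude A and width W. A small sketch using the classic formulation (the Shannon variant, log2(A/W + 1), is also common); the regression coefficients are hypothetical placeholders, not values fitted in the study:

```python
import math

def fitts_id(amplitude, width):
    """Index of difficulty in bits, classic Fitts formulation: log2(2A/W)."""
    return math.log2(2 * amplitude / width)

def movement_time(amplitude, width, a=0.1, b=0.15):
    """MT = a + b * ID; a (s) and b (s/bit) are illustrative coefficients."""
    return a + b * fitts_id(amplitude, width)

# Halving the target width adds exactly one bit of difficulty:
print(fitts_id(8, 2))  # 3.0 bits
print(fitts_id(8, 1))  # 4.0 bits
```

The r² values reported above are the quality of exactly this linear fit between measured production times and the index of difficulty.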
Patient Navigators Connecting Patients to Community Resources to Improve Diabetes Outcomes.
Loskutova, Natalia Y; Tsai, Adam G; Fisher, Edwin B; LaCruz, Debby M; Cherrington, Andrea L; Harrington, T Michael; Turner, Tamela J; Pace, Wilson D
2016-01-01
Despite the recognized importance of lifestyle modification in reducing risk of developing type 2 diabetes and in diabetes management, the use of available community resources by both patients and their primary care providers (PCPs) remains low. The patient navigator model, widely used in cancer care, may have the potential to link PCPs and community resources for reduction of risk and control of type 2 diabetes. In this study we tested the feasibility and acceptability of telephone-based nonprofessional patient navigation to promote linkages between the PCP office and community programs for patients with or at risk for diabetes. This was a mixed-methods interventional prospective cohort study conducted between November 2012 and August 2013. We included adult patients with and at risk for type 2 diabetes from six primary care practices. Patient-level measures of glycemic control, diabetes care, and self-efficacy from medical records, and qualitative interview data on acceptability and feasibility, were used. A total of 179 patients participated in the study. Two patient navigators provided services over the phone, using motivational interviewing techniques. Patient navigators provided regular feedback to PCPs and followed up with the patients through phone calls. The patient navigators made 1028 calls, with an average of 6 calls per patient. At follow-up, reduction in HbA1c (7.8 ± 1.9% vs 7.2 ± 1.3%; P = .001) and improvement in patient self-efficacy (3.1 ± 0.8 vs 3.6 ± 0.7; P < .001) were observed. Qualitative analysis revealed uniformly positive feedback from providers and patients. The patient navigator model is a promising and acceptable strategy to link patient, PCP, and community resources for promoting lifestyle modification in people living with or at risk for type 2 diabetes. © Copyright 2016 by the American Board of Family Medicine.
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.
2017-02-01
Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm and become the current standard of care for cranial neurosurgery. Intra-operative image guided navigation systems are currently used to allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground truth landmark to the intra-operative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its coordinates on the post-operative CT or MR images, thus quantifying cranial neurosurgical error.
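The error metric described, comparing the navigated position of a landmark against its position on postoperative imaging, reduces to a Euclidean distance once both points are expressed in the same coordinate frame. A minimal sketch with hypothetical screw-center coordinates:

```python
import math

def target_registration_error(navigated_xyz, ground_truth_xyz):
    """Euclidean distance (mm) between the navigation system's reported
    landmark position and the same landmark on postoperative imaging.
    Both points must already be in a common coordinate frame."""
    return math.dist(navigated_xyz, ground_truth_xyz)

# Hypothetical screw-center coordinates in mm
error = target_registration_error((10.0, 22.0, 5.0), (11.0, 20.0, 7.0))
print(round(error, 2))  # 3.0 mm
```
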
Similar brain networks for detecting visuo-motor and visuo-proprioceptive synchrony.
Balslev, Daniela; Nielsen, Finn A; Lund, Torben E; Law, Ian; Paulson, Olaf B
2006-05-15
The ability to recognize feedback from own movement as opposed to the movement of someone else is important for motor control and social interaction. The neural processes involved in feedback recognition are incompletely understood. Two competing hypotheses have been proposed: the stimulus is compared with either (a) the proprioceptive feedback or with (b) the motor command and if they match, then the external stimulus is identified as feedback. Hypothesis (a) predicts that the neural mechanisms or brain areas involved in distinguishing self from other during passive and active movement are similar, whereas hypothesis (b) predicts that they are different. In this fMRI study, healthy subjects saw visual cursor movement that was either synchronous or asynchronous with their active or passive finger movements. The aim was to identify the brain areas where the neural activity depended on whether the visual stimulus was feedback from own movement and to contrast the functional activation maps for active and passive movement. We found activity increases in the right temporoparietal cortex in the condition with asynchronous relative to synchronous visual feedback from both active and passive movements. However, no statistically significant difference was found between these sets of activated areas when the active and passive movement conditions were compared. With a posterior probability of 0.95, no brain voxel had a contrast effect above 0.11% of the whole-brain mean signal. These results do not support the hypothesis that recognition of visual feedback during active and passive movement relies on different brain areas.
Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin
2017-01-01
Biological pattern formation exhibits a variety of striking phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanisms of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized and image features are extracted as the system feedback. The unknown model parameters are then obtained by comparing the image features of the simulation image with those of the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe, and labyrinthine patterns of vascular mesenchymal cells, as well as the normal lung branching pattern and the branching pattern lacking side branching, are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulated patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation. PMID:28225811
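The feedback principle described can be sketched in a few lines: run the model, extract an image feature, and feed the feature error back into the parameter until simulation and target match. The `simulate_feature` function below is a stand-in toy, not the paper's reaction-diffusion solver or its image-feature extractor:

```python
def simulate_feature(param):
    # Stand-in for: run the reaction-diffusion model with this parameter,
    # render the pattern, and extract a scalar image feature
    # (e.g., stripe spacing). Toy mapping for illustration only.
    return 2.0 * param + 1.0

def tune_parameter(target_feature, param=0.0, gain=0.3, tol=1e-6, max_iter=200):
    """Feedback loop: adjust the model parameter until the simulated
    pattern's image feature matches the target pattern's feature."""
    for _ in range(max_iter):
        error = target_feature - simulate_feature(param)
        if abs(error) < tol:
            break
        param += gain * error   # proportional correction from the feedback
    return param

print(round(tune_parameter(7.0), 3))  # 3.0, since 2 * 3 + 1 = 7
```

With this toy mapping the error shrinks by a fixed factor each iteration, mirroring the paper's observation that convergence is fast when the pattern is sensitive to the parameter.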
A unified framework for image retrieval using keyword and visual features.
Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo
2005-07-01
In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.
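The entropy-based active learning strategy mentioned above amounts to asking the user for relevance feedback on the images the current model is least certain about. A minimal sketch with hypothetical relevance probabilities (the paper's actual keyword models and features are far more elaborate):

```python
import math

def entropy(p):
    """Binary entropy (bits) of a predicted relevance probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_for_feedback(probs, k=2):
    """Pick the k unlabeled images whose predicted relevance is most
    uncertain (highest entropy) to present for relevance feedback."""
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

# Hypothetical relevance probabilities for five candidate images
print(select_for_feedback([0.95, 0.50, 0.10, 0.45, 0.99]))  # [1, 3]
```

Images with probabilities near 0.5 carry the most information per user judgment, which is why this selection improves feedback efficiency over querying confidently ranked images.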
Visual feedback in stuttering therapy
NASA Astrophysics Data System (ADS)
Smolka, Elzbieta
1997-02-01
The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. Visual stimuli have been compared with acoustic and visual-acoustic stimuli. The methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are then presented, along with the concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors for use in stuttering therapy.
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
Martínez-Cañada, Pablo; Halnes, Geir; Fyhn, Marianne
2018-01-01
Despite half-a-century of research since the seminal work of Hubel and Wiesel, the role of the dorsal lateral geniculate nucleus (dLGN) in shaping the visual signals is not properly understood. Placed en route from retina to primary visual cortex in the early visual pathway, a striking feature of the dLGN circuit is that both the relay cells (RCs) and interneurons (INs) not only receive feedforward input from retinal ganglion cells, but also a prominent feedback from cells in layer 6 of visual cortex. This feedback has been proposed to affect synchronicity and other temporal properties of the RC firing. It has also been seen to affect spatial properties such as the center-surround antagonism of thalamic receptive fields, i.e., the suppression of the response to very large stimuli compared to smaller, more optimal stimuli. Here we explore the spatial effects of cortical feedback on the RC response by means of a comprehensive network model with biophysically detailed, single-compartment and multicompartment neuron models of RCs, INs, and a population of orientation-selective layer 6 simple cells consisting of pyramidal cells (PY). We have considered two different arrangements of synaptic feedback from the ON and OFF zones in the visual cortex to the dLGN: phase-reversed (‘push-pull’) and phase-matched (‘push-push’), as well as different spatial extents of the corticothalamic projection pattern. Our simulation results support that a phase-reversed arrangement provides a more effective way for cortical feedback to provide the increased center-surround antagonism seen in experiments, both for flashing spots and, even more prominently, for patch gratings. This implies that ON-center RCs receive direct excitation from OFF-dominated cortical cells and indirect inhibitory feedback from ON-dominated cortical cells. 
The increased center-surround antagonism in the model is accompanied by spatial focusing, i.e., the maximum RC response occurs for smaller stimuli when feedback is present. PMID:29377888
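Center-surround antagonism of the kind probed here is conventionally summarized by a difference-of-Gaussians area-summation curve: the integrated center drive saturates for small spots while the wider surround keeps growing, so very large stimuli yield a suppressed response. A toy illustration with hypothetical gains and widths, not parameters from the model above:

```python
import math

def rc_response(radius, a_c=1.0, s_c=0.5, a_s=0.6, s_s=2.0):
    """Difference-of-Gaussians area-summation curve: integrated center
    drive minus integrated surround drive for a spot of given radius.
    a_c/a_s are gains, s_c/s_s Gaussian widths (center narrower)."""
    center = a_c * (1 - math.exp(-radius**2 / (2 * s_c**2)))
    surround = a_s * (1 - math.exp(-radius**2 / (2 * s_s**2)))
    return center - surround

small, large = rc_response(1.0), rc_response(6.0)
print(small > large)  # True: the large spot's response is suppressed
```

Stronger feedback in the push-pull arrangement would correspond, in this summary picture, to a deeper surround term and hence a sharper drop of the curve past its peak.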
Visualizing Syllables: Real-Time Computerized Feedback within a Speech-Language Intervention
ERIC Educational Resources Information Center
DeThorne, Laura; Aparicio Betancourt, Mariana; Karahalios, Karrie; Halle, Jim; Bogue, Ellen
2015-01-01
Computerized technologies now offer unprecedented opportunities to provide real-time visual feedback to facilitate children's speech-language development. We employed a mixed-method design to examine the effectiveness of two speech-language interventions aimed at facilitating children's multisyllabic productions: one incorporated a novel…
The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions.
ERIC Educational Resources Information Center
Harth, E.; And Others
1987-01-01
Explains the hierarchic structure of the mammalian visual system. Proposes a model in which feedback pathways serve to modify sensory stimuli in ways that enhance and complete sensory input patterns. Investigates the functioning of the system through computer simulations. (ML)
Navigation and Image Injection for Control of Bone Removal and Osteotomy Planes in Spine Surgery.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven Rainer; Archavlis, Elefterios; Giese, Alf
2017-04-01
In contrast to cranial interventions, neuronavigation in spinal surgery is used in few applications, not tapping into its full technological potential. We have developed a method to preoperatively create virtual resection planes and volumes for spinal osteotomies and export 3-D operation plans to a navigation system controlling intraoperative visualization using a surgical microscope's head-up display. The method was developed using a Sawbone® model of the lumbar spine, demonstrating feasibility with high precision. Computer tomographic and magnetic resonance image data were imported into Amira®, a 3-D visualization software. Resection planes were positioned, and resection volumes representing intraoperative bone removal were defined. Fused to the original Digital Imaging and Communications in Medicine data, the osteotomy planes were exported to the cranial version of a Brainlab® navigation system. A navigated surgical microscope with video connection to the navigation system allowed intraoperative image injection to visualize the preplanned resection planes. The workflow was applied to a patient presenting with a congenital hemivertebra of the thoracolumbar spine. Dorsal instrumentation with pedicle screws and rods was followed by resection of the deformed vertebra guided by the in-view image injection of the preplanned resection planes into the optical path of a surgical microscope. Postoperatively, the patient showed no neurological deficits, and the spine was found to be restored in near physiological posture. The intraoperative visualization of resection planes in a microscope's head-up display was found to assist the surgeon during the resection of a complex-shaped bone wedge and may help to further increase accuracy and patient safety. Copyright © 2017 by the Congress of Neurological Surgeons
Learning to See: Enhancing Student Learning through Videotaped Feedback
ERIC Educational Resources Information Center
Yakura, Elaine K.
2009-01-01
Feedback is crucial to developing skills, but meaningful feedback is difficult to provide. Classroom videotaping can provide effective feedback on student performance, but for video feedback to be most helpful, students must develop a type of "visual intelligence"--analytical skills that increase critical thinking and self-awareness. The author…
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions for improving his performance when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the exclusively visually displayed system was found to decrease, but in general that of the auditorily supported system was not, it was concluded that the auditory feedback unloads the visual system of the operator, who can then concentrate on the remaining exclusively visual displays.
2013-05-29
...not necessarily express the views of and should not be attributed to ESA. ...visual navigation to maneuver autonomously to reduce the size of the...successful orbit and three-dimensional imaging of an RSO, using passive visual-only navigation and real-time near-optimal guidance. The mission design...Kit (STK) in the Earth-centered Earth-fixed (ECF) coordinate system, loaded into Simulink and transformed to the BFF for calculation of the SRP
A real-time plantar pressure feedback device for foot unloading.
Femery, Virginie G; Moretto, Pierre G; Hespel, Jean-Michel G; Thévenon, André; Lensel, Ghislaine
2004-10-01
To develop and test a plantar pressure control device that provides both visual and auditory feedback and is suitable for correcting plantar pressure distribution patterns in persons susceptible to neuropathic foot ulceration. Pilot test. Sports medicine laboratory in a university in France. One healthy man in his mid-thirties. Not applicable. Main outcome measures: A device was developed based on real-time feedback, incorporating an acoustic alarm and visual signals, adjusted to a specific pressure load. Plantar pressure was measured during walking, at 6 sensor locations over 27 steps under 2 different conditions: (1) natural and (2) unloaded in response to device feedback. The subject was able to modify his gait in response to the auditory and visual signals. He did not compensate for the decrease of peak pressure under the first metatarsal by increasing the duration of the load shift under this area. Gait pattern modification centered on a mediolateral load shift. The auditory signal provided a warning system alerting the user to potentially harmful plantar pressures. The visual signal indicated the degree of pressure. People who have lost nociceptive perception, as in cases of diabetic neuropathy, may be able to change their walking pattern in response to the feedback provided by this device. The visual signal may also have diagnostic value in determining plantar pressures in such patients. This pilot test indicates that further studies are warranted.
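The core feedback logic the abstract describes can be sketched compactly. This is a hypothetical illustration, not the authors' implementation: each sensor has a pressure threshold, exceeding any threshold triggers the acoustic alarm, and the visual signal grades each sensor's load. All names and values are invented.

```python
# Illustrative sketch of a per-sensor threshold feedback scheme (assumed
# design, not from the paper). Pressures and thresholds are in arbitrary
# units; 6 sensors were used in the study, 3 shown here for brevity.

def feedback(pressures, thresholds):
    """Return (alarm, levels): alarm fires if any sensor exceeds its
    threshold; levels grade each sensor's load as a fraction of threshold,
    mimicking the graded visual signal."""
    levels = [p / t for p, t in zip(pressures, thresholds)]
    alarm = any(level > 1.0 for level in levels)
    return alarm, levels

alarm, levels = feedback([120, 80, 95], [100, 100, 100])
# First sensor is overloaded, so the auditory alarm fires while the
# visual channel shows the per-sensor degree of loading.
```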
Tang, Rixin; Whitwell, Robert L; Goodale, Melvyn A
2014-01-01
Previous research (Whitwell et al. in Exp Brain Res 188:603-611, 2008; Whitwell and Goodale in Exp Brain Res 194:619-629, 2009) has shown that trial history, but not anticipatory knowledge about the presence or absence of visual feedback on an upcoming trial, plays a vital role in determining how that feedback is exploited when grasping with the right hand. Nothing is known about how the non-dominant left hand behaves under the same feedback regimens. In the present study, therefore, we compared peak grip aperture (PGA) for left- and right-hand grasps executed with and without visual feedback (i.e., closed- vs. open-loop conditions) in right-handed individuals under three different trial schedules: the feedback conditions were blocked separately, randomly interleaved, or alternated. When feedback conditions were blocked, the PGA was much larger for open-loop trials than for closed-loop trials, although this difference was more pronounced for right-hand grasps than for left-hand grasps. Like Whitwell et al., we found that mixing open- and closed-loop trials together, compared to blocking them separately, homogenized the PGA for open- and closed-loop grasping in the right hand (i.e., the PGAs became smaller on open-loop trials and larger on closed-loop trials). In addition, the PGAs for right-hand grasps were entirely determined by trial history and not by knowledge of whether or not visual feedback would be available on an upcoming trial. In contrast to grasps made with the right hand, grasps made by the left hand were affected both by trial history and by anticipatory knowledge of the upcoming visual feedback condition. These effects were observed only on closed-loop trials, however: the PGAs of grasps made with the left hand on closed-loop trials were smaller when participants could anticipate the availability of feedback on an upcoming trial (alternating trials) than when they could not (randomized trials).
In contrast, grasps made with the left hand on open-loop trials exhibited the same large PGAs under all feedback schedules: blocked, random, or alternating. In other words, there was no evidence for homogenization. Taken together, these results suggest that in addition to the real-time demands of the task, such as the target's size and position and the availability of visual feedback, the initial (i.e., pre-movement) programming of right-hand grasping relies on what happened on the previous trial, whereas the programming of left-hand grasping is more cognitively supervised and exploits explicit information about trial order to prepare for an upcoming trial.
Adaptive Locomotor Behavior in Larval Zebrafish
Portugues, Ruben; Engert, Florian
2011-01-01
In this study we report that larval zebrafish display adaptive locomotor output that can be driven by unexpected visual feedback. We develop a new assay that addresses visuomotor integration in restrained larval zebrafish. The assay involves a closed-loop environment in which the visual feedback a larva receives depends on its own motor output in a way that resembles freely swimming conditions. The experimenter can control the gain of this closed feedback loop, so that following a given motor output the larva experiences more or less visual feedback depending on whether the gain is high or low. We show that increases and decreases in this gain setting result in adaptive changes in behavior that lead to a generalized decrease or increase of motor output, respectively. Our behavioral analysis shows that both the duration and tail beat frequency of individual swim bouts can be modified, as well as the frequency with which bouts are elicited. These changes can be implemented rapidly, following an exposure to a new gain of just 175 ms. In addition, modifications in some behavioral parameters accumulate over tens of seconds and effects last for at least 30 s from trial to trial. These results suggest that larvae establish an internal representation of the visual feedback expected from a given motor output and that the behavioral modifications are driven by an error signal that arises from the discrepancy between this expectation and the actual visual feedback. The assay we develop presents a unique possibility for studying visuomotor integration using imaging techniques available in the larval zebrafish. PMID:21909325
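The closed-loop gain manipulation described above has a simple structure that can be sketched in code. This is a minimal illustration of the principle, with invented names and numbers: the visual feedback (e.g., backward grating speed experienced by the larva) is the baseline drift minus the larva's motor output scaled by an experimenter-set gain, so high gain means a given swim bout produces more visual feedback.

```python
# Minimal sketch of a closed-loop visual feedback gain, as in the assay
# described above (variable names and the baseline value are assumptions).

def visual_feedback_speed(motor_output, gain, baseline_speed=10.0):
    """Perceived drift speed: baseline forward drift of the visual world,
    reduced by the larva's gain-scaled motor output (reafference)."""
    return baseline_speed - gain * motor_output

# With high gain, the same motor output cancels more of the baseline drift,
# so the larva experiences stronger visual feedback per unit of effort.
low_gain_drift = visual_feedback_speed(motor_output=5.0, gain=1.0)
high_gain_drift = visual_feedback_speed(motor_output=5.0, gain=2.0)
```

Adaptation in the study then amounts to the larva adjusting bout duration, tail-beat frequency, and bout rate so that the experienced drift matches its expectation under the current gain.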
Playing in the "Gutter": Cultivating Creativity in Medical Education and Practice.
Liou, Kevin T; Jamorabo, Daniel S; Dollase, Richard H; Dumenco, Luba; Schiffman, Fred J; Baruch, Jay M
2016-03-01
In comics, "gutters" are the empty spaces between panels that readers must navigate to weave disjointed visual sequences into coherent narratives. A gutter, however, is more than a blank space--it represents a creative zone for making connections and for constructing meaning from disparate ideas, values, and experiences. Over the course of medical training, learners encounter various "gutters" created by the disconnected subject blocks and learning experiences within the curriculum, the ambiguity and uncertainty of medical practice, and the conflicts and tensions within clinical encounters. Navigating these gutters requires not only medical knowledge and skills but also creativity, defined as the ability to make connections between disparate fragments to create meaningful, new configurations. To cultivate medical students' creative capacity, the authors developed the Integrated Clinical Arts (ICA) program, a required component of the first-year curriculum at the Warren Alpert Medical School of Brown University. ICA workshops are designed to place students in a metaphorical gutter, wherein they can practice making connections between medicine and arts-based disciplines. By playing in the gutter, students have opportunities to broaden their perspectives, gain new insights into both medical practice and themselves, and explore different ways of making meaning. Student feedback on the ICA program highlights an important role for creativity and the arts in medicine: to transform gutters from potential learning barriers into opportunities for discovery, self-reflection, and personal growth.
Occlusion-free animation of driving routes for car navigation systems.
Takahashi, Shigeo; Yoshida, Kenichi; Shimada, Kenji; Nishita, Tomoyuki
2006-01-01
This paper presents a method for occlusion-free animation of geographical landmarks, and its application to a new type of car navigation system in which driving routes of interest are always visible. This is achieved by animating a nonperspective image where geographical landmarks such as mountain tops and roads are rendered as if they are seen from different viewpoints. The technical contribution of this paper lies in formulating the nonperspective terrain navigation as an inverse problem of continuously deforming a 3D terrain surface from the 2D screen arrangement of its associated geographical landmarks. The present approach provides a perceptually reasonable compromise between the navigation clarity and visual realism where the corresponding nonperspective view is fully augmented by assigning appropriate textures and shading effects to the terrain surface according to its geometry. An eye tracking experiment is conducted to prove that the present approach actually exhibits visually-pleasing navigation frames while users can clearly recognize the shape of the driving route without occlusion, together with the spatial configuration of geographical landmarks in its neighborhood.
Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn
2014-10-08
Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, which require the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed in a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in the time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing control policy versus the acquisition of a new one, and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors 0270-6474/14/3413768-12$15.00/0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkhurst, James M.; Price, Gareth J., E-mail: gareth.price@christie.nhs.uk; Faculty of Medical and Human Sciences, Manchester Academic Health Sciences Centre, University of Manchester, Manchester
2013-12-01
Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface, with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm, compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies.
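The conformance-to-reference measure implied by this study can be illustrated with a toy metric. This is a hedged sketch, not the authors' implementation: it scores a measured surface sample against a reference pose as a root-mean-square point-wise deviation, which decreases as the subject conforms better to the reference.

```python
# Illustrative conformance metric (assumed formulation): RMS of point-wise
# deviations (in mm) between a measured body-surface sample and a reference
# surface. Real systems compare dense 3-D meshes; flat lists suffice here.
import math

def rms_deviation(surface, reference):
    """Root-mean-square point-wise difference between two surface samples."""
    diffs = [(s - r) ** 2 for s, r in zip(surface, reference)]
    return math.sqrt(sum(diffs) / len(diffs))

# A subject matching the reference pose scores zero; free breathing drifts.
with_feedback = rms_deviation([1.0, 2.0, 1.5], [1.0, 2.0, 1.5])
free_breathing = rms_deviation([2.0, 4.0, 0.5], [1.0, 2.0, 1.5])
```

Displaying such a deviation map over the whole body surface in real time is what lets the subject actively correct pose and breathing amplitude.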
Effects of acoustic feedback training in elite-standard Para-Rowing.
Schaffert, Nina; Mattes, Klaus
2015-01-01
Assessment and feedback devices have been regularly used in technique training in high-performance sports. Biomechanical analysis is mainly visually based and so can exclude athletes with visual impairments. The aim of this study was to examine the effects of auditory feedback on mean boat speed during on-water training of visually impaired athletes. The German National Para-Rowing team (six athletes, mean ± s, age 34.8 ± 10.6 years, body mass 76.5 ± 13.5 kg, stature 179.3 ± 8.6 cm) participated in the study. Kinematics included boat acceleration and distance travelled, collected with Sofirow at two intensities of training. The boat acceleration-time traces were converted online into acoustic feedback and presented via speakers during rowing (sections with and without alternately). Repeated-measures within-participant factorial ANOVA showed greater boat speed with acoustic feedback than baseline (0.08 ± 0.01 m·s(-1)). The time structure of rowing cycles was improved (extended time of positive acceleration). Questioning of athletes showed acoustic feedback to be a supportive training aid as it provided important functional information about the boat motion independent of vision. It gave access for visually impaired athletes to biomechanical analysis via auditory information. The concept for adaptive athletes has been successfully integrated into the preparation for the Para-Rowing World Championships and Paralympics.
Mohanty, Suman; Greene, Rachel K.; Cook, Edwin H.; Vaillancourt, David E.; Sweeney, John A.
2015-01-01
Sensorimotor abnormalities are common in autism spectrum disorder (ASD) and among the earliest manifestations of the disorder. They have been studied far less than the social-communication and cognitive deficits that define ASD, but a mechanistic understanding of sensorimotor abnormalities in ASD may provide key insights into the neural underpinnings of the disorder. In this human study, we examined rapid, precision grip force contractions to determine whether feedforward mechanisms supporting initial motor output before sensory feedback can be processed are disrupted in ASD. Sustained force contractions also were examined to determine whether reactive adjustments to ongoing motor behavior based on visual feedback are altered. Sustained force was studied across multiple force levels and visual gains to assess motor and visuomotor mechanisms, respectively. Primary force contractions of individuals with ASD showed greater peak rate of force increases and large transient overshoots. Individuals with ASD also showed increased sustained force variability that scaled with force level and was more severe when visual gain was highly amplified or highly degraded. When sustaining a constant force level, their reactive adjustments were more periodic than controls, and they showed increased reliance on slower feedback mechanisms. Feedforward and feedback mechanism alterations each were associated with more severe social-communication impairments in ASD. These findings implicate anterior cerebellar circuits involved in feedforward motor control and posterior cerebellar circuits involved in transforming visual feedback into precise motor adjustments in ASD. PMID:25653359
Vrtička, Pascal; Sander, David; Anderson, Brittany; Badoud, Deborah; Eliez, Stephan; Debbané, Martin
2014-01-01
Objective: The establishment of an accurate understanding of one's social context is a central developmental task during adolescence. A critical component of such development is learning how to integrate the objective evaluation of one's behavior with the social response to it, referred to here as social feedback processing. Methods: We measured brain activity by means of fMRI in 33 healthy adolescents (12-19 years old, 14 females). Participants played a difficult perceptual game with integrated verbal and visual feedback. Verbal feedback provided the participants with an objective performance evaluation (won vs. lost). Visual feedback consisted of either smiling or angry faces, representing positive or negative social evaluations. Together, the combination of verbal and visual feedback gave rise to congruent versus incongruent social feedback combinations. In addition to assessing sex differences, we further tested for the effects of age and attachment style on social feedback processing. Results: Brain activity during social feedback processing was significantly modulated by sex, age, and attachment style in prefrontal cortical areas, ventral anterior cingulate cortex, anterior insula, caudate, and amygdala/hippocampus. We found indications of heightened activity during incongruent social feedback processing in females, older participants, and individuals with an anxious attachment style. Conversely, we observed stronger activity during processing of congruent social feedback in males and in participants with an avoidant attachment style. Conclusion: Our findings not only extend knowledge of the typical development of socio-emotional brain function during adolescence, but also provide the first clues as to how attachment insecurities, and particularly attachment avoidance, could interfere with these mechanisms. PMID:25328847
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D
2006-01-01
The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.
Guidance/Navigation Requirements Study Final Report. Volume III. Appendices
1978-04-30
...shown in Figure G-2. The free-flight simulation program FFSIM uses quaternions to calculate the body attitude as a function of time. To calculate the...the lack of open-loop damping, the existence of a feedback controller which will stabilize the closed-loop system depends upon the satisfaction of a...re-entry vehicle has dynamic peculiarities which tend to discourage the use of "linear-quadratic" feedback regulators in guidance. The disadvantageous
UAV State Estimation Modeling Techniques in AHRS
NASA Astrophysics Data System (ADS)
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the state correctly improves navigation accuracy and allows flight missions to be achieved safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
Improving Student Performance Using Nudge Analytics
ERIC Educational Resources Information Center
Feild, Jacqueline
2015-01-01
Providing students with continuous and personalized feedback on their performance is an important part of encouraging self regulated learning. As part of our higher education platform, we built a set of data visualizations to provide feedback to students on their assignment performance. These visualizations give students information about how they…
Brain-actuated gait trainer with visual and proprioceptive feedback
NASA Astrophysics Data System (ADS)
Liu, Dong; Chen, Weihai; Lee, Kyuhwa; Chavarriaga, Ricardo; Bouri, Mohamed; Pei, Zhongcai; Millán, José del R.
2017-10-01
Objective. Brain-machine interfaces (BMIs) have been proposed in closed-loop applications for neuromodulation and neurorehabilitation. This study describes the impact of different feedback modalities on the performance of an EEG-based BMI that decodes motor imagery (MI) of leg flexion and extension. Approach. We executed experiments in a lower-limb gait trainer (the legoPress) where nine able-bodied subjects participated in three consecutive sessions based on a crossover design. A random forest classifier was trained from the offline session and tested online with visual and proprioceptive feedback, respectively. Post-hoc classification was conducted to assess the impact of feedback modalities and of a learning effect (an improvement over time) on the simulated trial-based performance. Finally, we performed feature analysis to investigate the discriminant power and brain pattern modulations across the subjects. Main results. (i) For real-time classification, the average accuracy was 62.33 ± 4.95% and 63.89 ± 6.41% for the two online sessions. The results were significantly higher than chance level, demonstrating the feasibility of distinguishing between MI of leg extension and flexion. (ii) For post-hoc classification, the performance with proprioceptive feedback (69.45 ± 9.95%) was significantly better than with visual feedback (62.89 ± 9.20%), while there was no significant learning effect. (iii) We report individual discriminant features and brain patterns associated with each feedback modality, which exhibited differences between the two modalities, although no general conclusion can be drawn. Significance. The study reports a closed-loop brain-controlled gait trainer as a proof of concept for neurorehabilitation devices. We report the feasibility of decoding lower-limb movement in an intuitive and natural way. As far as we know, this is the first online study discussing the role of feedback modalities in lower-limb MI decoding. 
Our results suggest that proprioceptive feedback has an advantage over visual feedback, which could be used to improve robot-assisted strategies for motor training and functional recovery.
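The offline-train/online-test decoding pipeline above can be sketched with a toy classifier. The study used a random forest; as a dependency-free stand-in for the classifier stage, the sketch below trains a nearest-mean classifier on invented one-dimensional "features" during an offline session and applies it to a new online trial. All names, labels, and data are illustrative.

```python
# Toy offline-train / online-test decoding pipeline (nearest-mean classifier
# as a stand-in for the study's random forest; data are invented).

def train_nearest_mean(samples, labels):
    """Learn per-class feature means from the offline session."""
    means = {}
    for cls in set(labels):
        vals = [s for s, l in zip(samples, labels) if l == cls]
        means[cls] = sum(vals) / len(vals)
    return means

def predict(means, sample):
    """Assign the class whose mean is closest to the sample."""
    return min(means, key=lambda cls: abs(sample - means[cls]))

# Offline "session": flexion-MI features cluster near 0, extension near 1.
means = train_nearest_mean(
    [0.1, -0.2, 0.0, 0.9, 1.1, 1.0],
    ["flexion", "flexion", "flexion",
     "extension", "extension", "extension"])

# Online "session": classify a newly observed trial in real time.
label = predict(means, 0.2)
```

In the actual BMI, the features would be EEG band powers per channel and the classifier's online output would drive the gait trainer, closing the loop through visual or proprioceptive feedback.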
Marginally perceptible outcome feedback, motor learning and implicit processes.
Masters, Rich S W; Maxwell, Jon P; Eves, Frank F
2009-09-01
Participants struck 500 golf balls to a concealed target. Outcome feedback was presented at the subjective or objective threshold of awareness of each participant or at a supraliminal threshold. Participants who received fully perceptible (supraliminal) feedback learned to strike the ball onto the target, as did participants who received feedback that was only marginally perceptible (subjective threshold). Participants who received feedback that was not perceptible (objective threshold) showed no learning. Upon transfer to a condition in which the target was unconcealed, performance increased in both the subjective and the objective threshold condition, but decreased in the supraliminal condition. In all three conditions, participants reported minimal declarative knowledge of their movements, suggesting that deliberate hypothesis testing about how best to move in order to perform the motor task successfully was disrupted by the impoverished disposition of the visual outcome feedback. It was concluded that sub-optimally perceptible visual feedback evokes implicit processes.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
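The heading-from-optic-flow principle the model builds on can be shown in a few lines. This is a hedged illustration of the underlying geometry, not the neural model itself: during pure translation, optic flow is radial about the focus of expansion (FOE), whose retinal position indicates heading. With a known expansion rate k, each flow vector v = k * (p - foe) votes for foe = p - v / k, and pooling the votes (here, a plain average) loosely mirrors the spatial pooling of motion-sensitive signals described above. All names and numbers are assumptions.

```python
# Illustrative FOE (heading) estimation from a synthetic radial flow field
# (geometric sketch only; the MSTd model uses neural pooling, not voting).

def estimate_foe(points, flows, k):
    """Average the per-point FOE votes foe = p - v / k."""
    votes = [(px - vx / k, py - vy / k)
             for (px, py), (vx, vy) in zip(points, flows)]
    n = len(votes)
    return (sum(x for x, _ in votes) / n,
            sum(y for _, y in votes) / n)

# Synthetic flow expanding about (2.0, 1.0) with rate k = 0.5.
foe_true = (2.0, 1.0)
pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 5.0), (-1.0, 3.0)]
flw = [(0.5 * (x - foe_true[0]), 0.5 * (y - foe_true[1])) for x, y in pts]
foe = estimate_foe(pts, flw, k=0.5)
```

The model's contribution is to show how this computation emerges from known mechanisms (log-polar magnification, Gaussian receptive fields, pooling, and extraretinal eye-movement signals) rather than from explicit heading templates.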
Using Screencasts to Enhance Assessment Feedback: Students' Perceptions and Preferences
ERIC Educational Resources Information Center
Marriott, Pru; Teoh, Lim Keong
2012-01-01
In the UK, assessment and feedback have been regularly highlighted by the National Student Survey as critical aspects that require improvement. An innovative approach to delivering feedback that has proved successful in non-business-related disciplines is the delivery of audio and visual feedback using screencast technology. The feedback on…
Parahippocampal and retrosplenial contributions to human spatial navigation
Epstein, Russell A.
2010-01-01
Spatial navigation is a core cognitive ability in humans and animals. Neuroimaging studies have identified two functionally-defined brain regions that activate during navigational tasks and also during passive viewing of navigationally-relevant stimuli such as environmental scenes: the parahippocampal place area (PPA) and the retrosplenial complex (RSC). Recent findings indicate that the PPA and RSC play distinct and complementary roles in spatial navigation, with the PPA more concerned with representation of the local visual scene and RSC more concerned with situating the scene within the broader spatial environment. These findings are a first step towards understanding the separate components of the cortical network that mediates spatial navigation in humans. PMID:18760955
Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.
Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling
2017-07-01
Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.
Mensh, B D; Aksay, E; Lee, D D; Seung, H S; Tank, D W
2004-03-01
To quantify performance of the goldfish oculomotor neural integrator and determine its dependence on visual feedback, we measured the relationship between eye drift-velocity and position during spontaneous gaze fixations in the light and in the dark. In the light, drift-velocities were typically less than 1 deg/s, similar to those observed in humans. During brief periods in darkness, drift-velocities were only slightly larger, but showed greater variance. One hour in darkness degraded fixation-holding performance. These findings suggest that while visual feedback is not essential for online fixation stability, it may be used to tune the mechanism of persistent neural activity in the oculomotor integrator.
Visuomotor adaptability in older adults with mild cognitive decline.
Schaffert, Jeffrey; Lee, Chi-Mei; Neill, Rebecca; Bo, Jin
2017-02-01
The current study examined the augmentation of error feedback on visuomotor adaptability in older adults with varying degrees of cognitive decline (assessed by the Montreal Cognitive Assessment; MoCA). Twenty-three participants performed a center-out computerized visuomotor adaptation task when the visual feedback of their hand movement error was presented in a regular (ratio=1:1) or enhanced (ratio=1:2) error feedback schedule. Results showed that older adults with lower scores on the MoCA had less adaptability than those with higher MoCA scores during the regular feedback schedule. However, participants demonstrated similar adaptability during the enhanced feedback schedule, regardless of their cognitive ability. Furthermore, individuals with lower MoCA scores showed larger after-effects in spatial control during the enhanced schedule compared to the regular schedule, whereas individuals with higher MoCA scores displayed the opposite pattern. Additional neuro-cognitive assessments revealed that spatial working memory and processing speed were positively related to motor adaptability during the regular schedule but negatively related to adaptability during the enhanced schedule. We argue that individuals with mild cognitive decline employed different adaptation strategies when encountering enhanced visual feedback, suggesting older adults with mild cognitive impairment (MCI) may benefit from enhanced visual error feedback during sensorimotor adaptation. Copyright © 2016 Elsevier B.V. All rights reserved.
Neuronal connectome of a sensory-motor circuit for visual navigation
Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár
2014-01-01
Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217
Evaluating a de-cluttering technique for NextGen RNAV and RNP charts
DOT National Transportation Integrated Search
2012-10-14
The authors propose a de-cluttering technique to simplify the depiction of visually complex Area Navigation (RNAV) and Required Navigation Performance (RNP) procedures by reducing the number of paths shown on a single chart page. An experiment was co...
Navigation in a Virtual Environment Using a Walking Interface
2000-11-01
Fukusima, 1993; Mittelstaedt & Glasauer, 1991; Schmuckler, 1995). Thus, only visual information is available for navigation by dead reckoning (Gallistel ... Washington DC: National Academy Press. Gallistel, C.R. (1990). The Organization of Learning. Cambridge, MA: MIT Press. Iwata, H. & Matsuda, K. (1992). Haptic
Smit, Daan; Spruit, Edward; Dankelman, Jenny; Tuijthof, Gabrielle; Hamming, Jaap; Horeman, Tim
2017-01-01
Visual force feedback allows trainees to learn laparoscopic tissue manipulation skills. The aim of this experimental study was to find the most efficient visual force feedback method to acquire these skills. Retention and transfer validity to an untrained task were assessed. Medical students without prior experience in laparoscopy were randomized in three groups: Constant Force Feedback (CFF) (N = 17), Bandwidth Force Feedback (BFF) (N = 16) and Fade-in Force Feedback (N = 18). All participants performed a pretest, training, post-test and follow-up test. The study involved two dissimilar tissue manipulation tasks, one for training and one to assess transferability. Participants performed six trials of the training task. A force platform was used to record several force parameters. A paired-sample t test showed overall lower force parameter outcomes in the post-test compared to the pretest (p < .001). A week later, the force parameter outcomes were still significantly lower than found in the pretest (p < .005). Participants also performed the transfer task in the post-test (p < .02) and follow-up (p < .05) test with lower force parameter outcomes compared to the pretest. A one-way MANOVA indicated that in the post-test the CFF group applied 50 % less Mean Absolute Nonzero Force (p = .005) than the BFF group. All visual force feedback methods proved effective in decreasing tissue manipulation force, as no major differences were found between groups in the post-test and follow-up trials. The BFF method is preferred because it respects individual progress and minimizes distraction.
Towards automated visual flexible endoscope navigation.
van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J
2013-10-01
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head mounted display (HMD), whereas user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complexity issues, which are discussed in the paper.
Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S
2016-10-01
Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores similarly improved in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925) as well as payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Yuan, Yifeng; Shen, Huizhong
2016-01-01
This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…
Using Real-Time Visual Feedback to Improve Posture at Computer Workstations
ERIC Educational Resources Information Center
Sigurdsson, Sigurdur O.; Austin, John
2008-01-01
The purpose of the current study was to examine the effects of a multicomponent intervention that included discrimination training, real-time visual feedback, and self-monitoring on postural behavior at a computer workstation in a simulated office environment. Using a nonconcurrent multiple baseline design across 8 participants, the study assessed…
ERIC Educational Resources Information Center
Lin, Huifen
2011-01-01
The purpose of this study was to investigate the relative effectiveness of different types of visuals (static and animated) and instructional strategies (no strategy, questions, and questions plus feedback) used to complement visualized materials on students' learning of different educational objectives in a computer-based instructional (CBI)…
Nam, Seung-Min; Kim, Kyoung; Lee, Do Youn
2018-01-01
[Purpose] This study examined the effects of visual feedback balance training on balance and ankle instability in adult men with functional ankle instability. [Subjects and Methods] Twenty-eight adults with functional ankle instability were divided randomly into an experimental group, which performed visual feedback balance training for 20 minutes and ankle joint exercises for 10 minutes, and a control group, which performed ankle joint exercises for 30 minutes. Exercises were completed three times a week for 8 weeks. The BioRescue system was used to assess balance ability, measuring the limit of stability over one minute. Ankle instability was measured using the Cumberland Ankle Instability Tool (CAIT). These measures were taken before and after the experiment in each group. [Results] The experimental group showed a significant increase in both the limit of stability and the CAIT score. The control group showed a significant increase in the CAIT score, while its limit of stability increased without reaching significance. [Conclusion] Visual feedback balance training can be recommended as a treatment method for patients with functional ankle instability.
The Healthcare Needs of Latinos with Serious Mental Illness and the Potential of Peer Navigators.
Corrigan, Patrick W; Torres, Alessandra; Lara, Juana L; Sheehan, Lindsay; Larson, Jonathon E
2017-07-01
Latinos with serious mental illness get sick and die much younger than other adults. In this paper, we review findings of a community based participatory research project meant to identify important healthcare needs, barriers to these needs, solutions to the barriers, and the promise of peer navigators as a solution. Findings from focus groups reflected general concerns of people with mental illness (e.g., insurance, engagement, accessibility) and Latinos with serious mental illness (e.g., immigration, language, and family). Feedback and analyses especially focused on the potential of peer navigators. Implications of these findings for integrated care of Latinos with serious mental illness are discussed.
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long term drift and also improve its robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the Field-Of-View of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce the failure situations. Secondly, a global landmark matching technique is used to recognize the previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman Filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real world experiments on a human worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
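The core idea of the abstract above, drifting relative odometry corrected by occasional absolute measurements (matched landmarks or GPS) in a Kalman filtering framework, can be illustrated with a minimal one-dimensional sketch. The noise parameters and the drifting odometry values below are illustrative assumptions, not values from the paper.

```python
# Minimal 1D Kalman-filter sketch: visual odometry integrates relative
# displacements (and so accumulates drift); occasional absolute fixes
# (e.g., GPS or a recognized landmark) pull the estimate back.
# Noise parameters q (per-step odometry variance) and r (fix variance)
# are illustrative assumptions.

def kalman_fuse(odo_steps, fixes, q=0.04, r=1.0):
    """odo_steps: per-step odometry displacements;
    fixes: dict mapping step index -> absolute position measurement."""
    x, p = 0.0, 0.0              # position estimate and its variance
    track = []
    for k, dx in enumerate(odo_steps):
        x += dx                  # predict: integrate odometry
        p += q                   # uncertainty grows -> drift
        if k in fixes:           # update: absolute fix caps the drift
            gain = p / (p + r)
            x += gain * (fixes[k] - x)
            p *= (1.0 - gain)
        track.append(x)
    return track

# Odometry overestimates each 1 m step by 10%; true position at step k
# is k + 1, with absolute fixes arriving at steps 4 and 9.
track = kalman_fuse([1.1] * 10, {4: 5.0, 9: 10.0})
```

Without the fixes the integrated odometry ends at 11.0 m against a true 10.0 m; with them the final estimate lands noticeably closer to the truth, which is the drift-capping behavior the paper exploits at much larger scale.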
Perception of CPR quality: Influence of CPR feedback, Just-in-Time CPR training and provider role.
Cheng, Adam; Overly, Frank; Kessler, David; Nadkarni, Vinay M; Lin, Yiqun; Doan, Quynh; Duff, Jonathan P; Tofil, Nancy M; Bhanji, Farhan; Adler, Mark; Charnovich, Alex; Hunt, Elizabeth A; Brown, Linda L
2015-02-01
Many healthcare providers rely on visual perception to guide cardiopulmonary resuscitation (CPR), but little is known about the accuracy of provider perceptions of CPR quality. We aimed to describe the difference between perceived versus measured CPR quality, and to determine the impact of provider role, real-time visual CPR feedback and Just-in-Time (JIT) CPR training on provider perceptions. We conducted secondary analyses of data collected from a prospective, multicenter, randomized trial of 324 healthcare providers who participated in a simulated cardiac arrest scenario between July 2012 and April 2014. Participants were randomized to one of four permutations of: JIT CPR training and real-time visual CPR feedback. We calculated the difference between perceived and measured quality of CPR and reported the proportion of subjects accurately estimating the quality of CPR within each study arm. Participants overestimated achieving adequate chest compression depth (mean difference range: 16.1-60.6%) and rate (range: 0.2-51%), and underestimated chest compression fraction (0.2-2.9%) across all arms. Compared to no intervention, the use of real-time feedback and JIT CPR training (alone or in combination) improved perception of depth (p<0.001). Accurate estimation of CPR quality was poor for chest compression depth (0-13%), rate (5-46%) and chest compression fraction (60-63%). Perception of depth is more accurate in CPR providers versus team leaders (27.8% vs. 7.4%; p=0.043) when using real-time feedback. Healthcare providers' visual perception of CPR quality is poor. Perceptions of CPR depth are improved by using real-time visual feedback and with prior JIT CPR training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Batcho, C S; Gagné, M; Bouyer, L J; Roy, J S; Mercier, C
2016-11-19
When subjects learn a novel motor task, several sources of feedback (proprioceptive, visual or auditory) contribute to the performance. Over the past few years, several studies have investigated the role of visual feedback in motor learning, yet evidence remains conflicting. The aim of this study was therefore to investigate the role of online visual feedback (VFb) on the acquisition and retention stages of motor learning associated with training in a reaching task. Thirty healthy subjects made ballistic reaching movements with their dominant arm toward two targets, on 2 consecutive days using a robotized exoskeleton (KINARM). They were randomly assigned to a group with (VFb) or without (NoVFb) VFb of index position during movement. On day 1, the task was performed before (baseline) and during the application of a velocity-dependent resistive force field (adaptation). To assess retention, participants repeated the task with the force field on day 2. Motor learning was characterized by: (1) the final endpoint error (movement accuracy) and (2) the initial angle (iANG) of deviation (motor planning). Even though both groups showed motor adaptation, the NoVFb-group exhibited slower learning and higher final endpoint error than the VFb-group. In some conditions, subjects trained without visual feedback used more curved initial trajectories to anticipate the perturbation. This observation suggests that learning to reach targets in a velocity-dependent resistive force field is possible even when feedback is limited. However, the absence of VFb leads to different strategies that were only apparent when reaching toward the most challenging target. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor
NASA Astrophysics Data System (ADS)
Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso
2018-04-01
Automatic navigation for drones is under active development, spanning a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object, and an ultrasonic sensor was used to measure obstacle distance. The method used to track an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor complemented the image processing to improve the rate at which objects were fully detected. The obstacle avoidance system was evaluated by observing the program's decisions across a range of obstacle conditions read by the camera and ultrasonic sensors. PID controllers based on visual feedback were used to control the drone's movement.
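The visual-feedback PID control loop described above can be sketched minimally as follows. The gains, time step, the scalar "pixel offset" error signal, and the first-order plant used to close the loop are all illustrative assumptions, not values from the paper.

```python
# Minimal PID sketch: the error is the tracked obstacle's horizontal
# offset (in pixels) from the image center; the output would bias the
# quadcopter's yaw/roll command. Gains and plant are assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.01, kd=0.05, dt=0.02)

# Drive a simulated pixel offset toward zero through an assumed
# first-order plant (command scales the offset correction).
offset = 120.0
for _ in range(200):
    offset -= pid.step(offset) * pid.dt * 10.0
```

After 200 control steps the simulated offset has settled close to zero, the qualitative behavior a tuned visual-feedback loop is meant to produce.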
3D visualization of movements can amplify motor cortex activation during subsequent motor imagery
Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele
2015-01-01
A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642
Comprehension and Navigation of Networked Hypertexts
ERIC Educational Resources Information Center
Blom, Helen; Segers, Eliane; Knoors, Harry; Hermans, Daan; Verhoeven, Ludo
2018-01-01
This study aims to investigate secondary school students' reading comprehension and navigation of networked hypertexts with and without a graphic overview compared to linear digital texts. Additionally, it was studied whether prior knowledge, vocabulary, verbal, and visual working memory moderated the relation between text design and…
Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J
2010-12-01
The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.
Analysis of Feedback in after Action Reviews
1987-06-01
CONTENTS: Introduction; A Perspective on Feedback; Overview of Current Research. ...part of their training program. The AAR is in marked contrast to the critique method of feedback which is often used in military training. The AAR...feedback is task-inherent feedback. Task-inherent feedback refers to human-machine interacting systems, e.g., computers, where in a visual tracking task
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single pilot instrument flight rule simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations plus runways and true headings are included in the data base. The simulation included a TV displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows a simulation of a full mission scenario including breakout and landing.
Baweja, Harsimran S.; Patel, Bhavini K.; Neto, Osmar P.; Christou, Evangelos A.
2011-01-01
The purpose of this study was to compare force variability and the neural activation of the agonist muscle during constant isometric contractions at different force levels when the amplitude of respiration and visual feedback were varied. Twenty young adults (20–32 years, 10 men and 10 women) were instructed to accurately match a target force at 15 and 50% of their maximal voluntary contraction (MVC) with abduction of the index finger while controlling their respiration at different amplitudes (85, 100 and 125% normal) in the presence and absence of visual feedback. Each trial lasted 22 s and visual feedback was removed from 8–12 to 16–20 s. Each subject performed 3 trials with each respiratory condition at each force level. Force variability was quantified as the standard deviation of the detrended force data. The neural activation of the first dorsal interosseus (FDI) was measured with bipolar surface electrodes placed distal to the innervation zone. Relative to normal respiration, force variability increased significantly only during high-amplitude respiration (~63%). The increase in force variability from normal- to high-amplitude respiration was strongly associated with amplified force oscillations from 0–3 Hz (R2 ranged from .68 – .84; p < .001). Furthermore, the increase in force variability was exacerbated in the presence of visual feedback at 50% MVC (vision vs. no-vision: .97 vs. .87 N) and was strongly associated with amplified force oscillations from 0–1 Hz (R2 = .82) and weakly associated with greater power from 12–30 Hz (R2 = .24) in the EMG of the agonist muscle. Our findings demonstrate that high-amplitude respiration and visual feedback of force interact and amplify force variability in young adults during moderate levels of effort. PMID:21546109
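The study above quantifies force variability as the standard deviation of detrended force and relates it to power in low-frequency bands of the force signal. A minimal sketch of both measures is below; the linear detrend, FFT-based band power, and the simulated signal are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def force_variability(force, fs):
    """SD of linearly detrended force (one common reading of
    'standard deviation of the detrended force data')."""
    t = np.arange(len(force)) / fs
    slope, intercept = np.polyfit(t, force, 1)
    return (force - (slope * t + intercept)).std()

def band_power(force, fs, f_lo, f_hi):
    """Summed spectral power of force oscillations in [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(force - force.mean())) ** 2
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[mask].sum()

# Simulated force: constant target + slow drift + 1.5 Hz oscillation
fs = 100.0
t = np.arange(0, 4, 1 / fs)
sim = 10 + 0.05 * t + 0.2 * np.sin(2 * np.pi * 1.5 * t)
sd = force_variability(sim, fs)
```

With this decomposition, an increase in 0–3 Hz oscillation amplitude (as reported during high-amplitude respiration) shows up directly as an increase in the detrended SD.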
Prior history of FDI muscle contraction: different effect on MEP amplitude and muscle activity.
Talis, V L; Kazennikov, O V; Castellote, J M; Grishin, A A; Ioffe, M E
2014-03-01
Motor evoked potentials (MEPs) in the right first dorsal interosseous (FDI) muscle elicited by transcranial magnetic stimulation of left motor cortex were assessed in ten healthy subjects during maintenance of a fixed FDI contraction level. Subjects maintained an integrated EMG (IEMG) level with visual feedback and reproduced this level by memory afterwards in the following tasks: stationary FDI muscle contraction at the level of 40 ± 5 % of its maximum voluntary contraction (MVC; 40 % task), at the level of 20 ± 5 % MVC (20 % task), and also when 20 % MVC was preceded by either no contraction (0-20 task), by stronger muscle contraction (40-20 task) or by no contraction with a previous strong contraction (40-0-20 task). The results show that the IEMG level was within the prescribed limits when 20 and 40 % stationary tasks were executed with and without visual feedback. In 0-20, 40-20, and 40-0-20 tasks, 20 % IEMG level was precisely controlled in the presence of visual feedback, but without visual feedback the IEMG and force during 20 % IEMG maintenance were significantly higher in the 40-0-20 task than those in 0-20 and 40-20 tasks. That is, without visual feedback, there were significant variations in muscle activity due to different prehistory of contraction. In stationary tasks, MEP amplitudes in 40 % task were higher than in 20 % task. MEPs did not differ significantly during maintenance of the 20 % level in tasks with different prehistory of muscle contraction with and without visual feedback. Thus, in spite of variations in muscle background activity due to different prehistory of contraction MEPs did not vary significantly. This dissociation suggests that the voluntary maintenance of IEMG level is determined not only by cortical mechanisms, as reflected by corticospinal excitability, but also by lower levels of CNS, where afferent signals and influences from other brain structures and spinal cord are convergent.
Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2016-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486
Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets
Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.
2018-01-01
Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r2 = 0.96) and implicit targets (r2 = 0.89), but not as well-described for explicit targets without vision (r2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
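The analysis above rests on Fitts' law, MT = a + b·ID with index of difficulty ID = log2(2A/W); for an isometric grip task, A maps to the target force amplitude and W to the accepted force tolerance. A minimal sketch of computing ID and fitting the law by least squares is below (the helper names and synthetic numbers are illustrative, not from the paper).

```python
import math

def index_of_difficulty(amplitude, width):
    """Fitts' index of difficulty, ID = log2(2A / W). For grip force,
    A is the target force and W the tolerated force band."""
    return math.log2(2 * amplitude / width)

def fit_fitts(ids, times):
    """Ordinary least squares for MT = a + b * ID; returns (a, b, r2)."""
    n = len(ids)
    mean_id, mean_t = sum(ids) / n, sum(times) / n
    sxx = sum((i - mean_id) ** 2 for i in ids)
    sxy = sum((i - mean_id) * (t - mean_t) for i, t in zip(ids, times))
    b = sxy / sxx
    a = mean_t - b * mean_id
    ss_res = sum((t - (a + b * i)) ** 2 for i, t in zip(ids, times))
    ss_tot = sum((t - mean_t) ** 2 for t in times)
    return a, b, 1 - ss_res / ss_tot

index_of_difficulty(4.0, 2.0)  # → 2.0
```

The r² values reported in the abstract (e.g., 0.96 for explicit targets with vision) are exactly the goodness-of-fit of such a linear regression of movement time on ID.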
Visual control of navigation in insects and its relevance for robotics.
Srinivasan, Mandyam V
2011-08-01
Flying insects display remarkable agility, despite their diminutive eyes and brains. This review describes our growing understanding of how these creatures use visual information to stabilize flight, avoid collisions with objects, regulate flight speed, detect and intercept other flying insects such as mates or prey, navigate to a distant food source, and orchestrate flawless landings. It also outlines the ways in which these insights are now being used to develop novel, biologically inspired strategies for the guidance of autonomous, airborne vehicles. Copyright © 2011 Elsevier Ltd. All rights reserved.
Visual Orientation in Unfamiliar Gravito-Inertial Environments
NASA Technical Reports Server (NTRS)
Oman, Charles M.
1999-01-01
The goal of this project is to better understand the process of spatial orientation and navigation in unfamiliar gravito-inertial environments, and ultimately to use this new information to develop effective countermeasures against the orientation and navigation problems experienced by astronauts. How do we know the location, orientation, and motion of our body with respect to the external environment? On earth, gravity provides a convenient "down" cue. Large body rotations normally occur only in a horizontal plane. In space, the gravitational down cue is absent. When astronauts roll or pitch upside down, they must recognize where things are around them by a process of mental rotation which involves three dimensions, rather than just one. While working in unfamiliar situations they occasionally misinterpret visual cues and experience striking "visual reorientation illusions" (VRIs), in which the walls, ceiling, and floors of the spacecraft exchange subjective identities. VRIs cause disorientation and reaching errors, trigger attacks of space motion sickness, and can complicate emergency escape. MIR crewmembers report that 3D relationships between modules - particularly those with different visual verticals - are difficult to visualize, and so navigating through the node that connects them is not instinctive. Crew members learn routes, but their apparent lack of survey knowledge is a concern should fire, power loss, or depressurization limit visibility. Anecdotally, experience in mockups, parabolic flight, neutral buoyancy and virtual reality (VR) simulators helps. However, no techniques have been developed to quantify individual differences in orientation and navigation abilities, or the effectiveness of preflight visual orientation training. Our understanding of the underlying physiology - for example how our sense of place and orientation is neurally coded in three dimensions in the limbic system of the brain - is incomplete.
During the 16 months that this human and animal research project has been underway, we have obtained several results that are not only of basic research interest, but which have practical implications for the architecture and layout of spacecraft interiors and for the development of astronaut spatial orientation training countermeasures.
Mosconi, Matthew W; Mohanty, Suman; Greene, Rachel K; Cook, Edwin H; Vaillancourt, David E; Sweeney, John A
2015-02-04
Sensorimotor abnormalities are common in autism spectrum disorder (ASD) and among the earliest manifestations of the disorder. They have been studied far less than the social-communication and cognitive deficits that define ASD, but a mechanistic understanding of sensorimotor abnormalities in ASD may provide key insights into the neural underpinnings of the disorder. In this human study, we examined rapid, precision grip force contractions to determine whether feedforward mechanisms supporting initial motor output before sensory feedback can be processed are disrupted in ASD. Sustained force contractions also were examined to determine whether reactive adjustments to ongoing motor behavior based on visual feedback are altered. Sustained force was studied across multiple force levels and visual gains to assess motor and visuomotor mechanisms, respectively. Primary force contractions of individuals with ASD showed greater peak rate of force increases and large transient overshoots. Individuals with ASD also showed increased sustained force variability that scaled with force level and was more severe when visual gain was highly amplified or highly degraded. When sustaining a constant force level, their reactive adjustments were more periodic than controls, and they showed increased reliance on slower feedback mechanisms. Feedforward and feedback mechanism alterations each were associated with more severe social-communication impairments in ASD. These findings implicate anterior cerebellar circuits involved in feedforward motor control and posterior cerebellar circuits involved in transforming visual feedback into precise motor adjustments in ASD. Copyright © 2015 the authors 0270-6474/15/352015-11$15.00/0.
Passive haptics in a knee arthroscopy simulator: is it valid for core skills training?
McCarthy, Avril D; Moody, Louise; Waterworth, Alan R; Bickerstaff, Derek R
2006-01-01
Previous investigation of a cost-effective virtual reality arthroscopic training system, the Sheffield Knee Arthroscopy Training System (SKATS), indicated the desirability of including haptic feedback. A formal task analysis confirmed the importance of knee positioning as a core skill for trainees learning to navigate the knee arthroscopically. The system cost and existing limb interface, which permits knee positioning, would be compromised by the addition of commercial active haptic devices available currently. The validation results obtained when passive haptic feedback (resistance provided by physical structures) is provided indicate that SKATS has construct, predictive and face validity for navigation and triangulation training. When tested using SKATS, experienced surgeons (n = 11) performed significantly faster, located significantly more pathologies, and showed significantly shorter arthroscope path lengths than a less experienced surgeon cohort (n = 12). After SKATS training sessions, novices (n = 3) showed significant improvements in: task completion time, shorter arthroscope path lengths, shorter probe path lengths, and fewer arthroscope tip contacts. Main improvements occurred after the first two practice sessions, indicating rapid familiarization and a training effect. Feedback from questionnaires completed by orthopaedic surgeons indicates that the system has face validity for its remit of basic arthroscopic training.
Bakker, Niels H; Passenier, Peter O; Werkhoven, Peter J
2003-01-01
The type of navigation interface in a virtual environment (VE)--head slaved or indirect--determines whether or not proprioceptive feedback stimuli are present during movement. In addition, teleports can be used, which do not provide continuous movement but, rather, discontinuously displace the viewpoint over large distances. A two-part experiment was performed. The first part investigated whether head-slaved navigation provides an advantage for spatial learning in a VE. The second part investigated the role of anticipation when using teleports. The results showed that head-slaved navigation has an advantage over indirect navigation for the acquisition of spatial knowledge in a VE. Anticipating the destination of the teleport prevented disorientation after the displacement to a great extent but not completely. The time that was needed for anticipation increased if the teleport involved a rotation of the viewing direction. This research shows the potential added value of using a head-slaved navigation interface--for example, when using VE for training purposes--and provides practical guidelines for the use of teleports in VE applications.
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The Influence of Individual Differences on Diagrammatic Communication and Problem Representation
ERIC Educational Resources Information Center
King, Laurel A.
2009-01-01
Understanding the user and customizing the interface to augment cognition and usability are goals of human computer interaction research and design. Yet, little is known about the influence of individual visual-verbal information presentation preferences on visual navigation and screen element usage. If consistent differences in visual navigation…
Development of a force-reflecting robotic platform for cardiac catheter navigation.
Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung
2010-11-01
Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree of freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions-forward and backward movements, rolling, and catheter tip bending-are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
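The system above is a master-slave loop in which the slave tracks the master's motion commands and the force reflected back to the operator is estimated from the motor current. A heavily simplified sketch of that idea is below; the proportional gain, torque constant, effort-to-current gain, and integrator plant are all assumed illustrative values, not the authors' controller.

```python
def simulate_master_slave(master_targets, dt=0.001, kp=50.0, kt=0.05):
    """Slave position loop tracking master commands. The force reflected
    to the master is estimated from motor current (tau = kt * i), with
    current taken as proportional to the control effort."""
    pos = 0.0
    log = []
    for target in master_targets:
        effort = kp * (target - pos)      # P-control effort (velocity command)
        pos += effort * dt                # simple integrator as the plant
        current = abs(effort) * 0.1       # assumed effort-to-current gain
        log.append((pos, kt * current))   # (slave position, reflected torque)
    return log

log = simulate_master_slave([1.0] * 200)  # step command to 1.0
final_pos, final_tau = log[-1]
```

A step command like this converges with negligible steady-state error, mirroring the "satisfactory performance with insignificant steady-state motion error" reported for the real system; estimating reflected torque from current avoids mounting a force sensor at the catheter tip.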
Inertial Navigation System Standardized Software Development. Volume 1. Introduction and Summary
1976-06-01
the Loran receiver, the Tacan receiver, the Omega receiver, the satellite based instrumentation, the multimode radar, the star tracker and the visual...accelerometer scale factor, and the barometric altimeter bias. The accuracy (1σ values) of typical navigation-aid measurements (other than satellite derived
NASA Technical Reports Server (NTRS)
Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.
1984-01-01
Pilot and flight crew assessment of visually displayed information is examined as well as the effects of degraded and uncorrected motion feedback, and instrument scanning efficiency by the pilot. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.
The Use of Visual Feedback during Signing: Evidence from Signers with Impaired Vision
ERIC Educational Resources Information Center
Emmorey, Karen; Korpics, Franco; Petronio, Karen
2009-01-01
The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf…
ERIC Educational Resources Information Center
Squires, James; Wilder, David A.; Fixsen, Amanda; Hess, Erica; Rost, Kristen; Curran, Ryan; Zonneveld, Kimberly
2007-01-01
An intervention consisting of task clarification, visual prompts, and graphic feedback was evaluated to increase customer greeting and up-selling in a restaurant. A combination multiple baseline and reversal design was used to evaluate intervention effects. Although all interventions improved performance over baseline, the delivery of graphic…
ERIC Educational Resources Information Center
Bernhardt, B. May; Bacsfalvi, Penelope; Adler-Bock, Marcy; Shimizu, Reiko; Cheney, Audrey; Giesbrecht, Nathan; O'Connell, Maureen; Sirianni, Jason; Radanov, Bosko
2008-01-01
Ultrasound has shown promise as a visual feedback tool in speech therapy. Rural clients, however, often have minimal access to new technologies. The purpose of the current study was to evaluate consultative treatment using ultrasound in rural communities. Two speech-language pathologists (SLPs) trained in ultrasound use provided consultation with…
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues vs auditory or haptic for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels as compared to high intensity. Navigation accuracy was lower under high level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers, who are simultaneously performing a physical task, the visual modality appears most effective under high level exertion while haptic cueing degrades performance.
Immersive visualization for navigation and control of the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Hartman, Frank R.; Cooper, Brian; Maxwell, Scott; Wright, John; Yen, Jeng
2004-01-01
The Rover Sequencing and Visualization Program (RSVP) is a suite of tools for sequencing of planetary rovers, which are subject to significant light time delay and thus are unsuitable for teleoperation.
Satarasinghe, Praveen; Hamilton, Kojo D; Tarver, Michael J; Buchanan, Robert J; Koltz, Michael T
2018-04-17
Utilization of pedicle screws (PS) for spine stabilization is common in spinal surgery. With reliance on visual inspection of anatomical landmarks prior to screw placement, the free-hand technique requires a high level of surgeon skill and precision. Three-dimensional (3D), computer-assisted virtual neuronavigation improves the precision of PS placement and minimizes procedural steps. Twenty-three patients with degenerative, traumatic, or neoplastic pathologies received treatment via a novel three-step PS technique that utilizes a navigated power driver in combination with virtual screw technology. (1) Following visualization of neuroanatomy using intraoperative CT, a navigated 3-mm match stick drill bit was inserted at an anatomical entry point with a screen projection showing a virtual screw. (2) A Navigated Stryker Cordless Driver with an appropriate tap was used to access the vertebral body through a pedicle with a screen projection again showing a virtual screw. (3) A Navigated Stryker Cordless Driver with an actual screw was used with a screen projection showing the same virtual screw. One hundred and forty-four consecutive screws were inserted using this three-step, navigated driver, virtual screw technique. Only 1 screw needed intraoperative revision after insertion using the three-step, navigated driver, virtual PS technique. This amounts to a 0.69% revision rate. One hundred percent of patients had intraoperative CT reconstructed images taken to confirm hardware placement. Pedicle screw placement utilizing the Stryker-Ziehm neuronavigation virtual screw technology with a three step, navigated power drill technique is safe and effective.
Bedi, Harleen; Goltz, Herbert C; Wong, Agnes M F; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa
2013-01-01
Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccade is not affected by the presence of visual feedback; that is whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary "corrective" saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control, rather it improves the accuracy of secondary saccades in the antisaccade task.
Weakley, Jonathon Js; Wilson, Kyle M; Till, Kevin; Read, Dale B; Darrall-Jones, Joshua; Roe, Gregory; Phibbs, Padraic J; Jones, Ben
2017-07-12
It is unknown whether instantaneous visual feedback of resistance training outcomes can enhance barbell velocity in younger athletes. Therefore, the purpose of this study was to quantify the effects of visual feedback on mean concentric barbell velocity in the back squat, and to identify changes in motivation, competitiveness, and perceived workload. In a randomised-crossover design (Feedback vs. Control) feedback of mean concentric barbell velocity was or was not provided throughout a set of 10 repetitions in the barbell back squat. Magnitude-based inferences were used to assess changes between conditions, with almost certainly greater differences in mean concentric velocity between the Feedback (0.70 ±0.04 m·s⁻¹) and Control (0.65 ±0.05 m·s⁻¹) observed. Additionally, individual repetition mean concentric velocity ranged from possibly (repetition number two: 0.79 ±0.04 vs. 0.78 ±0.04 m·s⁻¹) to almost certainly (repetition number 10: 0.58 ±0.05 vs. 0.49 ±0.05 m·s⁻¹) greater when provided feedback, while almost certain differences were observed in motivation, competitiveness, and perceived workload, respectively. Providing adolescent male athletes with visual kinematic information while completing resistance training is beneficial for the maintenance of barbell velocity during a training set, potentially enhancing physical performance. Moreover, these improvements were observed alongside increases in motivation, competitiveness and perceived workload providing insight into the underlying mechanisms responsible for the performance gains observed. Given the observed maintenance of barbell velocity during a training set, practitioners can use this technique to manipulate training outcomes during resistance training.
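The feedback variable in the study above is mean concentric barbell velocity per repetition. A minimal sketch of computing it from barbell position samples is below; taking the lowest position as the start of the concentric phase and dividing net upward displacement by phase duration is an assumed, simplified definition, not the measurement device's algorithm.

```python
def mean_concentric_velocity(positions, fs):
    """Mean upward (concentric) barbell velocity in m/s, computed from
    position samples (m) sampled at fs Hz: net rise from the lowest
    point of the rep divided by the time taken to complete the rise."""
    turn = positions.index(min(positions))          # bottom of the squat
    displacement = positions[-1] - positions[turn]  # upward travel (m)
    duration = (len(positions) - 1 - turn) / fs     # concentric time (s)
    return displacement / duration

# One simulated rep: 0.5 s descent, 0.5 s ascent at constant speed
fs = 100.0
rep = [1.0 - 0.008 * i for i in range(51)] + [0.6 + 0.008 * i for i in range(1, 51)]
v = mean_concentric_velocity(rep, fs)
```

Displaying this value immediately after each repetition is what allows the within-set velocity maintenance effect reported in the abstract to be measured at all.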
Real-time decoding of the direction of covert visuospatial attention
NASA Astrophysics Data System (ADS)
Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.
2012-08-01
Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make it an attractive and intuitive alternative. We report on a study investigating the feasibility of decoding covert visuospatial attention in real time. We exploited the full potential of a 7 T MRI scanner to obtain the necessary signal quality, capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI-run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials where the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance.
While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
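The two-stage scheme described above (localize direction-selective regions, then classify single volumes) can be sketched minimally as picking the direction whose localizer-defined ROI shows the strongest signal change; the ROI masks, baseline normalisation, and argmax rule below are simplifying assumptions, not the paper's actual classifier.

```python
import numpy as np

def classify_attention(volume, roi_masks, baseline):
    """Pick the attended direction as the ROI with the largest
    baseline-normalised activity in a single fMRI volume.

    volume:    1D array of voxel intensities for the current volume
    roi_masks: dict direction -> boolean voxel mask (from the localizer)
    baseline:  per-voxel mean signal from the localizer part of the run
    """
    signal = (volume - baseline) / baseline  # percent signal change
    scores = {d: signal[m].mean() for d, m in roi_masks.items()}
    return max(scores, key=scores.get)

# Toy example: 100 voxels, four directional ROIs of 25 voxels each
rng = np.random.default_rng(0)
baseline = np.full(100, 100.0)
masks = {d: np.zeros(100, bool) for d in ("up", "down", "left", "right")}
for i, d in enumerate(masks):
    masks[d][i * 25:(i + 1) * 25] = True
volume = baseline + rng.normal(0, 0.5, 100)
volume[masks["left"]] += 2.0  # covert attention boosts the 'left' ROI
print(classify_attention(volume, masks, baseline))  # left
```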
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol were subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients.
RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.
A survey of telerobotic surface finishing
NASA Astrophysics Data System (ADS)
Höglund, Thomas; Alander, Jarmo; Mantere, Timo
2018-05-01
This is a survey of research published on the subjects of telerobotics, haptic feedback, and mixed reality applied to surface finishing. The survey especially focuses on how visuo-haptic feedback can be used to improve a grinding process using a remote manipulator or robot. The benefits of teleoperation and reasons for using haptic feedback are presented. The use of genetic algorithms for optimizing haptic sensing is briefly discussed. Ways of augmenting the operator's vision are described. Visual feedback can be used to find defects and analyze the quality of the surface resulting from the surface finishing process. Visual cues can also be used to aid a human operator in manipulating a robot precisely and avoiding collisions.
Real-Time Performance Feedback for the Manual Control of Spacecraft
NASA Astrophysics Data System (ADS)
Karasinski, John Austin
Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and a simple instructor-model visual feedback scheme were developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four-degree-of-freedom manual control task with two secondary tasks.
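The instructor-model feedback described above recolours a guidance element according to current tracking error; a minimal sketch might look like the following, where the error normalisation and the two thresholds are illustrative assumptions, not values from the dissertation.

```python
def guidance_color(error, warn=0.1, alert=0.25):
    """Instructor-model feedback: recolour the guidance element on the
    primary flight display according to the current (normalised)
    tracking error. Thresholds warn/alert are hypothetical."""
    if abs(error) < warn:
        return "green"   # performance within tolerance
    if abs(error) < alert:
        return "yellow"  # drifting: draw the pilot's attention
    return "red"         # large error: corrective action needed

print(guidance_color(0.05), guidance_color(0.18), guidance_color(0.4))
# green yellow red
```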
Western, Max J.; Peacock, Oliver J.; Stathi, Afroditi; Thompson, Dylan
2015-01-01
Background Innovative physical activity monitoring technology can be used to depict rich visual feedback that encompasses the various aspects of physical activity known to be important for health. However, it is unknown whether patients who are at risk of chronic disease would understand such sophisticated personalised feedback or whether they would find it useful and motivating. The purpose of the present study was to determine whether technology-enabled multidimensional physical activity graphics and visualisations are comprehensible and usable for patients at risk of chronic disease. Method We developed several iterations of graphics depicting minute-by-minute activity patterns and integrated physical activity health targets. Subsequently, patients at moderate/high risk of chronic disease (n=29) and healthcare practitioners (n=15) from South West England underwent full 7-day activity monitoring followed by individual semi-structured interviews in which they were asked to comment on their own personalised visual feedback. Framework analysis was used to gauge their interpretation and understanding of personalised feedback, graphics and visualisations. Results We identified two main components focussing on (a) the interpretation of feedback designs and data and (b) the impact of personalised visual physical activity feedback on facilitation of health behaviour change. Participants demonstrated a clear ability to understand the sophisticated personal information, as well as enhanced physical activity knowledge. They reported that receiving multidimensional feedback was motivating and could be usefully applied to facilitate their efforts in becoming more physically active. Conclusion Multidimensional physical activity feedback can be made comprehensible, informative and motivational by using appropriate graphics and visualisations.
There is an opportunity to exploit the full potential created by technological innovation and provide sophisticated personalised physical activity feedback as an adjunct to support behaviour change. PMID:25938455
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
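The positional and rotational errors reported above can be computed from a pair of simultaneously tracked poses once both are expressed in a common frame; the sketch below assumes calibrated pose inputs and uses the standard rotation-matrix angle formula, not the paper's specific 3D Slicer modules.

```python
import numpy as np

def tracking_errors(p_em, R_em, p_opt, R_opt):
    """Positional and rotational error of an EM-tracked pose, using an
    optically tracked pose of the same pointer as ground truth.

    p_*: 3-vectors (mm); R_*: 3x3 rotation matrices, both already
    expressed in a common coordinate frame after calibration.
    """
    pos_err = np.linalg.norm(np.asarray(p_em) - np.asarray(p_opt))  # mm
    R_delta = R_em @ R_opt.T                      # relative rotation
    cos_angle = (np.trace(R_delta) - 1.0) / 2.0   # angle of R_delta
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_err, rot_err

# Example: a 0.9 mm offset and a 0.31 degree rotation about z
theta = np.radians(0.31)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
pos, rot = tracking_errors(np.array([0.9, 0, 0]), Rz, np.zeros(3), np.eye(3))
print(round(pos, 2), round(rot, 2))  # 0.9 0.31
```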
Wei, Peng-Hu; Cong, Fei; Chen, Ge; Li, Ming-Chu; Yu, Xin-Guang; Bao, Yu-Hai
2017-02-01
Diffusion tensor imaging-based navigation is unable to resolve crossing fibers or to determine with accuracy the fanning, origin, and termination of fibers. It is important to improve the accuracy of localizing white matter fibers for improved surgical approaches. We propose a solution to this problem using navigation based on track density imaging extracted from high-definition fiber tractography (HDFT). A 28-year-old asymptomatic female patient with a left-lateral ventricle meningioma was enrolled in the present study. Language and visual tests, magnetic resonance imaging findings, both preoperative and postoperative HDFT, and the intraoperative navigation and surgery process are presented. Track density images were extracted from tracts derived using full q-space (514 directions) diffusion spectrum imaging (DSI) and integrated into a neuronavigation system. Navigation accuracy was verified via intraoperative records and postoperative DSI tractography, as well as a functional examination. DSI successfully represented the shape and range of the Meyer loop and arcuate fasciculus. Extracted track density images from the DSI were successfully integrated into the navigation system. The relationship between the operation channel and surrounding tracts was consistent with the postoperative findings, and the patient was functionally intact after the surgery. DSI-based TDI navigation allows for the visualization of anatomic features such as fanning and angling and helps to identify the range of a given tract. Moreover, our results show that our HDFT navigation method is a promising technique that preserves neural function. Copyright © 2016 Elsevier Inc. All rights reserved.
Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J
2018-01-01
There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Harvie, Daniel S.; Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can’t be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one.
The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
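The core MoOVi manipulation and its outcome measure reduce to two simple computations: scale the displayed rotation by a visual gain in the 50%–200% range, and measure the signed discrepancy between the pointed (perceived) direction and the real 50° rotation. The sketch below is illustrative; the example numbers are hypothetical, not data from the study.

```python
def moovi_visual_rotation(actual_deg, gain):
    """Virtual-camera rotation shown to the participant: the real head
    rotation scaled by a visual gain between 0.5 (50%) and 2.0 (200%)."""
    if not 0.5 <= gain <= 2.0:
        raise ValueError("gain outside the 50%-200% range used in the study")
    return actual_deg * gain

def kinaesthetic_drift(perceived_deg, actual_deg=50.0):
    """Signed discrepancy between where the participant points
    (perceived facing direction) and the real 50-degree rotation."""
    return perceived_deg - actual_deg

# Under a 150% gain, 50 degrees of real rotation is displayed as 75 degrees;
# a participant pointing at 58 degrees shows an 8-degree drift toward the
# visually suggested movement (values hypothetical).
print(moovi_visual_rotation(50.0, 1.5))  # 75.0
print(kinaesthetic_drift(58.0))          # 8.0
```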
78 FR 24817 - Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic Devices
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
The National Highway Traffic Safety Administration (NHTSA) is concerned about the effects of distraction on motor vehicle safety due to drivers' use of electronic devices. Consequently, NHTSA is issuing nonbinding, voluntary Driver Distraction Guidelines (NHTSA Guidelines) to promote safety by discouraging the introduction of excessively distracting devices in vehicles. This notice announces the issuance of the final version of the first phase of the NHTSA Guidelines. This first phase applies to original equipment (OE) in-vehicle electronic devices used by the driver to perform secondary tasks (communications, entertainment, information gathering, navigation tasks, etc. are considered secondary tasks) through visual-manual means (i.e., the driver looks at a device, manipulates a device-related control with his or her hand, and/or watches for visual feedback). The NHTSA Guidelines list certain secondary tasks believed by the agency to interfere inherently with a driver's ability to safely control the vehicle. The NHTSA Guidelines recommend that in-vehicle devices be designed so that they cannot be used by the driver to perform these inherently distracting secondary tasks while driving. For all other visual-manual secondary tasks, the NHTSA Guidelines specify a test method for measuring eye glance behavior during those tasks. Eye glance metrics are compared to acceptance criteria to evaluate whether a task interferes too much with driver attention, rendering it unsuitable for a driver to perform while driving. If a task does not meet the acceptance criteria, the NHTSA Guidelines recommend that the task be made inaccessible for performance by the driver while driving. In addition, the NHTSA Guidelines contain several recommendations to limit and reduce the potential for distraction associated with the use of OE in-vehicle electronic devices.
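As a rough illustration of how measured eye-glance metrics might be compared against acceptance criteria of the kind described in the notice, the sketch below checks one participant's glance record against three bounds. The specific thresholds (a 2-second per-glance limit, a 12-second total eyes-off-road budget, a 15% allowance of long glances) are commonly cited figures used here as illustrative defaults; the authoritative values and procedure are those in the NHTSA Guidelines themselves.

```python
def task_acceptable(glances, max_glance=2.0, total_budget=12.0, long_frac=0.15):
    """Check one participant's off-road glance durations (seconds)
    against illustrative acceptance criteria: a limited fraction of
    long glances, a bounded mean glance duration, and a total
    eyes-off-road budget for the secondary task."""
    if not glances:
        return True
    long_glances = sum(1 for g in glances if g > max_glance)
    return (long_glances / len(glances) <= long_frac
            and sum(glances) / len(glances) <= max_glance
            and sum(glances) <= total_budget)

print(task_acceptable([1.2, 0.8, 1.5, 0.9]))       # True: short glances, 4.4 s total
print(task_acceptable([2.5, 2.6, 2.4, 2.8, 2.1]))  # False: long glances, >12 s total
```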
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Pan, Shaowu; Shi, Liwei; Guo, Shuxiang
2015-01-01
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
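The second-order (constant-acceleration) Kalman prediction step used to centre the CT tracker's candidate-patch search can be sketched per image axis as follows; the frame rate, noise covariances, and example state are assumptions for illustration, not the paper's tuned values.

```python
import numpy as np

def make_ca_transition(dt=1.0 / 30.0):
    """State transition matrix for a constant-acceleration
    ('second-order') motion model along one image axis:
    state = [position, velocity, acceleration]."""
    return np.array([[1, dt, 0.5 * dt * dt],
                     [0, 1, dt],
                     [0, 0, 1]], dtype=float)

def predict(x, P, F, Q):
    """Standard Kalman prediction: propagate state and covariance.
    The predicted position centres the candidate-patch search window."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

F = make_ca_transition()
x = np.array([100.0, 30.0, 0.0])   # px, px/s, px/s^2 along one axis
x_pred, _ = predict(x, np.eye(3), F, np.eye(3) * 0.01)
print(round(x_pred[0], 1))         # predicted x one frame ahead: 101.0
```

The CT tracker then scores candidate patches around this predicted location instead of around the previous frame's location, which is what recovers tracking under high-speed target motion.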
Pseudohaptic interaction with knot diagrams
NASA Astrophysics Data System (ADS)
Weng, Jianguang; Zhang, Hui
2012-07-01
To make progress in understanding knot theory, we need to interact with the projected representations of mathematical knots, which are continuous in three dimensions (3-D) but significantly interrupted in the projective images. One way to achieve such a goal is to design an interactive system that allows us to sketch two-dimensional (2-D) knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances in interaction techniques allow progress in this direction. Pseudohaptics that simulate haptic effects using pure visual feedback can be used to develop such an interactive system. We outline one such pseudohaptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2-D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a physically reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudohaptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) of which the projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudohaptic four-dimensional (4-D) visualization system that simulates the continuous navigation on 4-D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that main features of the interface overcome some expected perceptual limitations in our interaction with 2-D knot diagrams of 3-D knots and 3-D projective images of 4-D mathematical objects.
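The pseudohaptic idea above (cursor acceleration and deceleration conveying slope) is typically realised by modulating the control-display gain of the mouse; the sketch below is a minimal illustration of that mechanism, with the gain constant and clamping range chosen arbitrarily rather than taken from the paper.

```python
def pseudohaptic_gain(gradient, k=0.8):
    """Control-display gain for a pseudohaptic cursor: the cursor is
    slowed on 'uphill' slopes and sped up on 'downhill' ones, so that
    purely visual feedback suggests bumps and holes. k and the
    clamping range are illustrative assumptions."""
    gain = 1.0 - k * gradient          # uphill (gradient > 0) -> slower
    return max(0.2, min(2.0, gain))    # keep the cursor controllable

def displayed_motion(mouse_dx, gradient):
    """Map a physical mouse displacement to on-screen cursor motion."""
    return mouse_dx * pseudohaptic_gain(gradient)

print(displayed_motion(10.0, 0.5))   # over a bump: 6.0 px instead of 10
print(displayed_motion(10.0, -0.5))  # into a hole: 14.0 px
```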
Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L
2004-01-01
Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpretation of data and in planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted into the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data.
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
NASA Astrophysics Data System (ADS)
Moysey, S. M.; Boyer, D. M.; Mobley, C.; Byrd, V. L.
2014-12-01
It is increasingly common to utilize simulations and games in the classroom, but learning opportunities can also be created by having students construct these cyberinfrastructure resources themselves. We outline two examples of such projects completed during the summer of 2014 within the NSF ACI sponsored REU Site: Research Experiences for Undergraduates in Collaborative Data Visualization Applications at Clemson University (Award 1359223). The first project focused on the development of immersive virtual reality field trips of geologic sites using the Oculus Rift headset. This project developed a platform that allows users to navigate virtual terrains derived from real-world data obtained from the US Geological Survey and Google Earth. The system provides users with the ability to partake in an interactive first-person exploration of a region, such as the Grand Canyon, and thus makes an important educational contribution for students without access to these environmental assets in the real world. The second project focused on providing players visual feedback about the sustainability of their practices within the web-based, multiplayer watershed management game Naranpur Online. Identifying sustainability indicators that communicate meaningful information to players and finding an effective way to visualize these data were a primary challenge faced by the student researcher working on this project. To solve this problem the student translated findings from the literature to the context of the game to develop a hierarchical set of relative sustainability criteria to be accessed by players within a sustainability dashboard. Though the REU focused on visualization, both projects forced the students to transform their thinking to address higher-level questions regarding the utilization and communication of environmental data or concepts, thus enhancing the educational experience for themselves and future students.
Development of the navigation system for visually impaired.
Harada, Tetsuya; Kaneko, Yuki; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2004-01-01
A white cane is a typical support instrument for the visually impaired, used to detect obstacles while walking. In areas for which they have a mental map, visually impaired people can therefore walk with a white cane without the help of others. In unfamiliar areas, however, they cannot walk independently even with a white cane, because the cane is a device for detecting obstacles, not for navigating the correct route. We are therefore developing an indoor navigation system for the visually impaired. In Japan, colored guide lines to a destination are sometimes laid on the floor for sighted people, who can reach the destination by walking along one of these lines. In our system, a newly developed white cane senses such a colored guide line and notifies the user by vibration; the cane recognizes the color of the line stuck on the floor using an optical sensor mounted in the cane. To provide still smoother guidance, infrared beacons (optical beacons), which can deliver voice guidance, are also used.
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
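The scale-ambiguity fix can be illustrated with a deliberately simplified sketch: if visual odometry reconstructs the depth of the laser spot in its own arbitrary units, one metric range from the laser distance meter fixes a scale factor that converts the trajectory to metric units. A single global scale is assumed here for clarity; the actual system fuses repeated measurements to correct drift over time, and all names below are illustrative.

```python
import numpy as np

def rescale_trajectory(rel_poses, est_depth, measured_depth):
    """Convert an up-to-scale monocular VO trajectory to metric units.

    rel_poses:      list of (R, t) relative camera motions, t up to scale
    est_depth:      VO-reconstructed depth of the laser spot (arbitrary units)
    measured_depth: metric range from the laser distance meter
    """
    s = measured_depth / est_depth            # global metric scale factor
    pos, R_acc = np.zeros(3), np.eye(3)
    traj = [pos.copy()]
    for R, t in rel_poses:
        pos = pos + R_acc @ (s * t)           # chain scaled relative motions
        R_acc = R_acc @ R
        traj.append(pos.copy())
    return traj
```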
Rover-based visual target tracking validation and mission infusion
NASA Technical Reports Server (NTRS)
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
van den Heuvel, Maarten R C; van Wegen, Erwin E H; de Goede, Cees J T; Burgers-Bots, Ingrid A L; Beek, Peter J; Daffertshofer, Andreas; Kwakkel, Gert
2013-10-04
Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease. Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography. We hypothesize that balance training based on visual feedback will show greater improvements in standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible not only in the co-registered posturographic recordings but also in changes in functional connectivity.
Gnadt, William; Grossberg, Stephen
2008-06-01
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. 
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA
2008-05-13
A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA
2012-03-06
A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Völter, Christoph J; Call, Josep
2012-09-01
What kind of information animals use when solving problems is a controversial topic. Previous research suggests that, in some situations, great apes prefer to use causally relevant cues over arbitrary ones. To further examine to what extent great apes are able to use information about causal relations, we presented three different puzzle box problems to the four nonhuman great ape species. Of primary interest here was a comparison between one group of apes that received visual access to the functional mechanisms of the puzzle boxes and one group that did not. Apes' performance in the first two, less complex puzzle boxes revealed that they are able to solve such problems by means of trial-and-error learning, requiring no information about the causal structure of the problem. However, visual inspection of the functional mechanisms of the puzzle boxes reduced the amount of time needed to solve the problems. In the case of the most complex problem, which required the use of a crank, visual feedback about what happened when the handle of the crank was turned was necessary for the apes to solve the task. Once the solution was acquired, however, visual feedback was no longer required. We conclude that visual feedback about the consequences of their actions helps great apes to solve complex problems. As the crank task matches the basic requirements of vertical string pulling in birds, the present results are discussed in light of recent findings with corvids.
NASA Astrophysics Data System (ADS)
Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian
2010-03-01
In order to improve the security and reliability for autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMI) to solve for the H∞ controller for the system. When considering different actuator failures, these conditions were then also mathematically expressed, allowing the H∞ robust controller to solve for these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.
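The LMI step can be made concrete with the standard bounded-real-lemma formulation of H∞ state-feedback synthesis. This is a textbook form, not the paper's exact fault-tolerant conditions, and the symbols below are generic. For the system

```latex
\dot{x} = A x + B_1 w + B_2 u, \qquad z = C_1 x + D_{12} u, \qquad u = K x,
```

one seeks $X = X^{\top} \succ 0$ and $Y$ satisfying the linear matrix inequality

```latex
\begin{bmatrix}
A X + X A^{\top} + B_2 Y + Y^{\top} B_2^{\top} & B_1 & (C_1 X + D_{12} Y)^{\top} \\
B_1^{\top} & -\gamma I & 0 \\
C_1 X + D_{12} Y & 0 & -\gamma I
\end{bmatrix} \prec 0 ,
```

in which case $K = Y X^{-1}$ guarantees $\lVert T_{zw} \rVert_{\infty} < \gamma$. One common passive fault-tolerant approach, consistent with the abstract's description, requires the same pair $(X, Y)$ to satisfy this LMI for every anticipated actuator-failure configuration, with $B_2$ replaced by its degraded version in each case.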
ERIC Educational Resources Information Center
Desmurget, Michel; Turner, Robert S.; Prablanc, Claude; Russo, Gary S.; Alexander, Garret E.; Grafton, Scott T.
2005-01-01
Six results are reported. (a) Reaching accuracy increases when visual capture of the target is allowed (e.g., target on vs. target off at saccade onset). (b) Whatever the visual condition, trajectories diverge only after peak acceleration, suggesting that accuracy is improved through feedback mechanisms. (c) Feedback corrections are smoothly…
ERIC Educational Resources Information Center
Bekker, Marthinus J.; Cumming, Tania D.; Osborne, Nikola K. P.; Bruining, Angela M.; McClean, Julia I.; Leland, Louis S., Jr.
2010-01-01
This experiment investigated the combined use of visual prompts, daily feedback, and rewards to reduce electricity consumption in a university residential hall. After a 17-day baseline period, the experimental intervention was introduced in the intervention hall, and no change was made in the control hall. Energy usage decreased in the…
ERIC Educational Resources Information Center
JENSON, PAUL G.; WESTERMEIER, FRANZ X.
A research project using the oscilloscope to determine visual feedback in the teaching of foreign language pronunciation was terminated because of technical difficulties that could not be resolved with the equipment available. Failure is attributed to such factors as (1) the speech sound waves sound the same though their wave shapes differ, (2)…
The Impact of Feedback on Self-Rated Driving Ability and Driving Self-Regulation among Older Adults
ERIC Educational Resources Information Center
Ackerman, Michelle L.; Crowe, Michael; Vance, David E.; Wadley, Virginia G.; Owsley, Cynthia; Ball, Karlene K.
2011-01-01
In 129 community-dwelling older adults, feedback regarding qualification for an insurance discount (based on a visual speed of processing test; Useful Field of View) was examined as a prospective predictor of change in self-reported driving ability, driving avoidance, and driving exposure over 3 months, along with physical, visual, health, and…
Ultrasound visual feedback treatment and practice variability for residual speech sound errors
Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin
2014-01-01
Purpose: The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938
Prism adaptation in virtual and natural contexts: Evidence for a flexible adaptive process.
Veilleux, Louis-Nicolas; Proteau, Luc
2015-01-01
Prism exposure when aiming at a visual target in a virtual condition (e.g., when the hand is represented by a video representation) produces no or only small adaptations (after-effects), whereas prism exposure in a natural condition produces large after-effects. Some researchers suggested that this difference may arise from distinct adaptive processes, but other studies suggested a unique process. The present study reconciled these conflicting interpretations. Forty participants were divided into two groups: One group used visual feedback of their hand (natural context), and the other group used computer-generated representational feedback (virtual context). Visual feedback during adaptation was concurrent or terminal. All participants underwent laterally displacing prism perturbation. The results showed that the after-effects were twice as large in the "natural context" than in the "virtual context". No significant differences were observed between the concurrent and terminal feedback conditions. The after-effects generalized to untested targets and workspace. These results suggest that prism adaptation in virtual and natural contexts involves the same process. The smaller after-effects in the virtual context suggest that the depth of adaptation is a function of the degree of convergence between the proprioceptive and visual information that arises from the hand.
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. In this paper, we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sounds, and ample and intuitive interaction with developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototype and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
NASA Astrophysics Data System (ADS)
Daly, Ian; Blanchard, Caroline; Holmes, Nicholas P.
2018-04-01
Objective. Brain-computer interfaces (BCIs) based on motor control have been suggested as tools for stroke rehabilitation. Some initial successes have been achieved with this approach, however the mechanism by which they work is not yet fully understood. One possible part of this mechanism is a, previously suggested, relationship between the strength of the event-related desynchronization (ERD), a neural correlate of motor imagination and execution, and corticospinal excitability. Additionally, a key component of BCIs used in neurorehabilitation is the provision of visual feedback to positively reinforce attempts at motor control. However, the ability of visual feedback of the ERD to modulate the activity in the motor system has not been fully explored. Approach. We investigate these relationships via transcranial magnetic stimulation delivered at different moments in the ongoing ERD related to hand contraction and relaxation during BCI control of a visual feedback bar. Main results. We identify a significant relationship between ERD strength and corticospinal excitability, and find that our visual feedback does not affect corticospinal excitability. Significance. Our results imply that efforts to promote functional recovery in stroke by targeting increases in corticospinal excitability may be aided by accounting for the time course of the ERD.
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
Fingerprints selection for topological localization
NASA Astrophysics Data System (ADS)
Popov, Vladimir
2017-07-01
Problems of visual navigation are extensively studied in contemporary robotics. In particular, we can mention different problems of visual landmarks selection, the problem of selection of a minimal set of visual landmarks, selection of partially distinguishable guards, the problem of placement of visual landmarks. In this paper, we consider one-dimensional color panoramas. Such panoramas can be used for creating fingerprints. Fingerprints give us unique identifiers for visually distinct locations by recovering statistically significant features. Fingerprints can be used as visual landmarks for the solution of various problems of mobile robot navigation. In this paper, we consider a method for automatic generation of fingerprints. In particular, we consider the bounded Post correspondence problem and applications of the problem to consensus fingerprints and topological localization. We propose an efficient approach to solve the bounded Post correspondence problem. In particular, we use an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms. In robotic experiments, we consider the average accuracy of reaching of the target point for different lengths of routes and types of fingerprints.
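The bounded Post correspondence problem that the abstract reduces to SAT can be stated concretely: given word pairs (tᵢ, bᵢ) and a bound K, decide whether some index sequence of length at most K makes the top and bottom concatenations equal. The minimal breadth-first search over string overhangs below only makes the decision problem explicit; it is not the paper's SAT encoding, and the function names are illustrative.

```python
from collections import deque

def bounded_pcp(pairs, max_len):
    """Decide the bounded Post correspondence problem by breadth-first search.

    pairs: list of (top, bottom) strings; max_len: bound on sequence length.
    Returns a witness index sequence, or None if no bounded match exists.
    """
    queue = deque((t, b, [i]) for i, (t, b) in enumerate(pairs))
    seen = set()
    while queue:
        top, bot, seq = queue.popleft()
        # A partial match is viable only if one side is a prefix of the other.
        if not (top.startswith(bot) or bot.startswith(top)):
            continue
        if top == bot:
            return seq
        if len(seq) >= max_len:
            continue
        # Deduplicate on the "overhang" (the unmatched tail of the longer side).
        key = (top[len(bot):], bot[len(top):], len(seq))
        if key in seen:
            continue
        seen.add(key)
        for i, (t, b) in enumerate(pairs):
            queue.append((top + t, bot + b, seq + [i]))
    return None
```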
Feasel, Jeff; Wentz, Erin; Brooks, Frederick P.; Whitton, Mary C.
2012-01-01
Background and Purpose Persistent deficits in gait speed and spatiotemporal symmetry are prevalent following stroke and can limit the achievement of community mobility goals. Rehabilitation can improve gait speed, but has shown limited ability to improve spatiotemporal symmetry. The incorporation of combined visual and proprioceptive feedback regarding spatiotemporal symmetry has the potential to be effective at improving gait. Case Description A 60-year-old man (18 months poststroke) and a 53-year-old woman (21 months poststroke) each participated in gait training to improve gait speed and spatiotemporal symmetry. Each patient performed 18 sessions (6 weeks) of combined treadmill-based gait training followed by overground practice. To assist with relearning spatiotemporal symmetry, treadmill-based training for both patients was augmented with continuous, real-time visual and proprioceptive feedback from an immersive virtual environment and a dual belt treadmill, respectively. Outcomes Both patients improved gait speed (patient 1: 0.35 m/s improvement; patient 2: 0.26 m/s improvement) and spatiotemporal symmetry. Patient 1, who trained with step-length symmetry feedback, improved his step-length symmetry ratio, but not his stance-time symmetry ratio. Patient 2, who trained with stance-time symmetry feedback, improved her stance-time symmetry ratio. She had no step-length asymmetry before training. Discussion Both patients made improvements in gait speed and spatiotemporal symmetry that exceeded those reported in the literature. Further work is needed to ascertain the role of combined visual and proprioceptive feedback for improving gait speed and spatiotemporal symmetry after chronic stroke. PMID:22228605
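The symmetry measures used as feedback here can be computed directly from gait events. The sketch below assumes one common convention for the symmetry ratio (larger side mean divided by smaller, so 1.0 is perfect symmetry); conventions vary across studies, and the event timing format is illustrative.

```python
def stance_times(heel_strikes, toe_offs):
    """Stance time per stride: each heel-strike to the following toe-off (seconds)."""
    return [toe - heel for heel, toe in zip(heel_strikes, toe_offs)]

def symmetry_ratio(side_a, side_b):
    """Spatiotemporal symmetry ratio of two sides' step measures.
    Larger mean over smaller mean, so 1.0 indicates perfect symmetry
    (one common convention; an assumption here)."""
    mean_a = sum(side_a) / len(side_a)
    mean_b = sum(side_b) / len(side_b)
    return max(mean_a, mean_b) / min(mean_a, mean_b)
```

The same ratio applies to step lengths; driving visual feedback amounts to recomputing it each stride and displaying the deviation from 1.0.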
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
Independent voluntary correction and savings in locomotor learning.
Leech, Kristan A; Roemmich, Ryan T
2018-06-14
People can acquire new walking patterns in many different ways. For example, we can change our gait voluntarily in response to instruction or adapt by sensing our movement errors. Here we investigated how acquisition of a new walking pattern through simultaneous voluntary correction and adaptive learning affected the resulting motor memory of the learned pattern. We studied adaptation to split-belt treadmill walking with and without visual feedback of stepping patterns. As expected, visual feedback enabled faster acquisition of the new walking pattern. However, upon later re-exposure to the same split-belt perturbation, participants exhibited similar motor memories whether they had learned with or without visual feedback. Participants who received feedback did not re-engage the mechanism used to accelerate initial acquisition of the new walking pattern to similarly accelerate subsequent relearning. These findings reveal that voluntary correction neither benefits nor interferes with the ability to save a new walking pattern over time. © 2018. Published by The Company of Biologists Ltd.
Learning receptive fields using predictive feedback.
Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H
2006-01-01
Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
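The predictive-feedback hypothesis has a compact computational core: feedback carries a prediction of lower-level activity, and the feedforward signal is the residual between that prediction and the actual input. The toy gradient-based inference loop below illustrates only this residual scheme; the paper's implementation uses a matching-pursuit algorithm, and the basis, learning rate, and names here are assumed.

```python
import numpy as np

def predictive_coding_step(x, U, r, lr=0.1):
    """One inference step of a Rao-Ballard-style predictive coder.

    Higher-level causes `r` generate a feedback prediction U @ r of the
    input x; the feedforward signal is the residual, which drives r."""
    residual = x - U @ r                 # carried by feedforward connections
    r = r + lr * (U.T @ residual)        # gradient step reducing the residual
    return r, residual

def infer(x, U, steps=200, lr=0.1):
    """Iterate inference until the prediction explains the input."""
    r = np.zeros(U.shape[1])
    for _ in range(steps):
        r, residual = predictive_coding_step(x, U, r, lr)
    return r, residual
```

When the input lies in the span of the basis, the feedforward residual decays toward zero, i.e. the feedback prediction fully accounts for the lower-level activity.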
Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R
2017-07-05
The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.
Barsotti, Michele; Leonardis, Daniele; Vanello, Nicola; Bergamasco, Massimo; Frisoli, Antonio
2018-01-01
Feedback plays a crucial role in the use of brain-computer interface systems. This paper proposes the use of vibration-evoked kinaesthetic illusions as part of a novel multisensory feedback for a motor imagery (MI)-based BCI and investigates its contributions in terms of BCI performance and electroencephalographic (EEG) correlates. Sixteen subjects performed two different right arm MI-BCI sessions: with the visual feedback only and with both visual and vibration-evoked kinaesthetic feedback, conveyed by the stimulation of the biceps brachii tendon. In both conditions, the sensory feedback was driven by the MI-BCI. The rich and more natural multisensory feedback was expected to facilitate the execution of MI, and thus to improve the performance of the BCI. The EEG correlates of the proposed feedback were also investigated with and without the performance of MI. The contribution of vibration-evoked kinaesthetic feedback led to statistically higher BCI performance (ANOVA, F(1,14) = 18.1, p < .01) and more stable EEG event-related desynchronization. The obtained results suggest promising applications of the proposed method in neuro-rehabilitation scenarios: the advantage of improved usability could make MI-BCIs more applicable for those patients having difficulties in performing kinaesthetic imagery.
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
Tse, Linda F L; Thanapalan, Kannan C; Chan, Chetwyn C H
2014-02-01
This study investigated the role of visual-perceptual input in writing Chinese characters among senior school-aged children who had handwriting difficulties (CHD). The participants were 27 CHD (9-11 years old) and 61 normally developing controls. There were three writing conditions: copying, and dictation with or without visual feedback. The motor-free subtests of the Developmental Test of Visual Perception (DTVP-2) were conducted. The CHD group showed significantly slower mean speeds of character production and less legibility of produced characters than the control group in all writing conditions (ps<0.001). There were significant deteriorations in legibility from copying to dictation without visual feedback. Nevertheless, the Group by Condition interaction effect was not statistically significant. Only position in space of DTVP-2 was significantly correlated with the legibility among CHD (r=-0.62, p=0.001). Poor legibility seems to be related to the less-intact spatial representation of the characters in working memory, which can be rectified by viewing the characters during writing. Visual feedback regarding one's own actions in writing can also improve legibility of characters among these children. Copyright © 2013 Elsevier Ltd. All rights reserved.
Perceptual learning increases the strength of the earliest signals in visual cortex.
Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A
2010-11-10
Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.
Visual Requirements for Human Drivers and Autonomous Vehicles
DOT National Transportation Integrated Search
2016-03-01
Identification of published literature between 1995 and 2013, focusing on determining the quantity and quality of visual information needed under both driving modes (i.e., human and autonomous) to navigate the road safely, especially as it pertains t...
UGV navigation in wireless sensor and actuator network environments
NASA Astrophysics Data System (ADS)
Zhang, Guyu; Li, Jianfeng; Duncan, Christian A.; Kanno, Jinko; Selmic, Rastko R.
2012-06-01
We consider a navigation problem in a distributed, self-organized and coordinate-free Wireless Sensor and Actuator Network (WSAN). We first present navigation algorithms that are verified using simulation results. Considering more than one destination and multiple mobile Unmanned Ground Vehicles (UGVs), we introduce a distributed solution to the Multi-UGV, Multi-Destination navigation problem. The objective of the solution to this problem is to efficiently allocate UGVs to different destinations and carry out navigation in the network environment that minimizes total travel distance. The main contribution of this paper is to develop a solution that does not attempt to localize either the UGVs or the sensor and actuator nodes. Other than some connectivity assumptions about the communication graph, we consider that no prior information about the WSAN is available. The solution presented here is distributed, and the UGV navigation is solely based on feedback from neighboring sensor and actuator nodes. One special case discussed in the paper, the Single-UGV, Multi-Destination navigation problem, is essentially equivalent to the well-known and difficult Traveling Salesman Problem (TSP). Simulation results are presented that illustrate the navigation distance traveled through the network. We also introduce an experimental testbed for the realization of coordinate-free and localization-free UGV navigation. We use the Cricket platform as the sensor and actuator network and a Pioneer 3-DX robot as the UGV. The experiments illustrate the UGV navigation in a coordinate-free WSAN environment where the UGV successfully arrives at the assigned destinations.
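The abstract notes that the Single-UGV, Multi-Destination case is essentially the Traveling Salesman Problem. For intuition only, a classical nearest-neighbor heuristic over known coordinates is sketched below; the paper's actual algorithm is distributed and coordinate-free, relying solely on hop-by-hop feedback from sensor and actuator nodes, so this sketch is illustrative rather than their method:

```python
import math

def nearest_neighbor_tour(start, destinations):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited destination.

    `start` and `destinations` are (x, y) tuples; returns (tour, total_distance).
    Note this assumes global coordinates, unlike the coordinate-free WSAN setting.
    """
    tour, total = [start], 0.0
    remaining = list(destinations)
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))  # closest point
        total += math.dist(current, nxt)
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour, total
```

Nearest-neighbor gives no optimality guarantee, which is one reason the multi-destination routing problem remains difficult even with full localization.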
NASA Technical Reports Server (NTRS)
Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry
2012-01-01
Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
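The pose-estimation scheme described above fuses high-rate IMU data with low-rate, high-latency visual SLAM position fixes in a Kalman filter. A minimal one-dimensional constant-velocity sketch of such a fusion is shown below; the noise values are illustrative, and the authors' implementation handles full vehicle pose and explicit latency compensation, which this sketch omits:

```python
import numpy as np

def kf_step(x, P, accel, z=None, dt=0.01, q=0.05, r=0.2):
    """One predict(+update) cycle of a 1D constant-velocity Kalman filter.

    x = [position, velocity]; IMU acceleration drives the high-rate prediction,
    and a low-rate SLAM position fix `z` corrects it when available.
    q and r are illustrative process/measurement noise magnitudes.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    B = np.array([0.5 * dt**2, dt])          # acceleration (control) input
    x = F @ x + B * accel                    # predict from IMU
    P = F @ P @ F.T + q * np.eye(2)
    if z is not None:                        # SLAM position measurement update
        H = np.array([[1.0, 0.0]])
        S = H @ P @ H.T + r                  # innovation covariance
        K = P @ H.T / S                      # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running the prediction at IMU rate and applying the SLAM update only when a fix arrives is what raises the position-estimate update rate for control, as described in the abstract.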
Sensor Architecture and Task Classification for Agricultural Vehicles and Environments
Rovira-Más, Francisco
2010-01-01
The long-time wish of endowing agricultural vehicles with an increasing degree of autonomy is becoming a reality thanks to two crucial facts: the broad diffusion of global positioning satellite systems and the inexorable progress of computers and electronics. Agricultural vehicles are currently the only self-propelled ground machines commonly integrating commercial automatic navigation systems. Farm equipment manufacturers and satellite-based navigation system providers, in a joint effort, have pushed this technology to unprecedented heights; yet there are many unresolved issues and an unlimited potential still to uncover. The complexity inherent to intelligent vehicles is rooted in the selection and coordination of the optimum sensors, the computer reasoning techniques to process the acquired data, and the resulting control strategies for automatic actuators. The advantageous design of the network of onboard sensors is necessary for the future deployment of advanced agricultural vehicles. This article analyzes a variety of typical environments and situations encountered in agricultural fields, and proposes a sensor architecture especially adapted to cope with them. The proposed strategy groups sensors into four specific subsystems: global localization, feedback control and vehicle pose, non-visual monitoring, and local perception. The designed architecture responds to vital vehicle tasks classified within three layers devoted to safety, operative information, and automatic actuation. The success of this architecture, implemented and tested in various agricultural vehicles over the last decade, rests on its capacity to integrate redundancy and incorporate new technologies in a practical way. PMID:22163522
Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E
2017-11-01
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those who exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across the screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) with higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related to motor performance.
In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test. Copyright © 2017 the American Physiological Society.
Promoting Increased Pitch Variation in Oral Presentations with Transient Visual Feedback
ERIC Educational Resources Information Center
Hincks, Rebecca; Edlund, Jens
2009-01-01
This paper investigates learner response to a novel kind of intonation feedback generated from speech analysis. Instead of displays of pitch curves, our feedback is flashing lights that show how much pitch variation the speaker has produced. The variable used to generate the feedback is the standard deviation of fundamental frequency as measured…
Vibrotactile grasping force and hand aperture feedback for myoelectric forearm prosthesis users.
Witteveen, Heidi J B; Rietman, Hans S; Veltink, Peter H
2015-06-01
User feedback about grasping force and hand aperture is very important in object handling with myoelectric forearm prostheses but is lacking in current prostheses. Vibrotactile feedback increases the performance of healthy subjects in virtual grasping tasks, but no extensive validation on potential users has been performed. To investigate the performance of subjects with upper-limb loss in grasping tasks with vibrotactile stimulation providing hand aperture and grasping force feedback. Cross-over trial. A total of 10 subjects with upper-limb loss performed virtual grasping tasks while perceiving vibrotactile feedback. Hand aperture feedback was provided through an array of coin motors and grasping force feedback through a single miniature stimulator or an array of coin motors. Objects with varying sizes and weights had to be grasped by a virtual hand. The percentages of correctly applied hand apertures and correct grasping force levels were all higher for the vibrotactile feedback condition than for the no-feedback condition. With visual feedback, the results were always better than with vibrotactile feedback. Task durations were comparable for all feedback conditions. Vibrotactile grasping force and hand aperture feedback improves the grasping performance of subjects with upper-limb loss. However, it should be investigated whether this is of additional value in daily-life tasks. This study is a first step toward the implementation of sensory vibrotactile feedback for users of myoelectric forearm prostheses. Grasping force feedback is crucial for optimal object handling, and hand aperture feedback is essential for reduction of required visual attention. © The International Society for Prosthetics and Orthotics 2014.
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Takeda, Kenta; Mani, Hiroki; Hasegawa, Naoya; Sato, Yuki; Tanaka, Shintaro; Maejima, Hiroshi; Asaka, Tadayoshi
2017-07-19
The benefit of visual feedback of the center of pressure (COP) on quiet standing is still debatable. This study aimed to investigate the adaptation effects of visual feedback training using both the COP and center of gravity (COG) during quiet standing. Thirty-four healthy young adults were randomly divided into three groups (COP + COG, COP, and control groups). A force plate was used to calculate the coordinates of the COP in the anteroposterior (COP_AP) and mediolateral (COP_ML) directions. A motion analysis system was used to calculate the coordinates of the center of mass (COM) in both directions (COM_AP and COM_ML). The coordinates of the COG in the AP direction (COG_AP) were obtained from the force plate signals. Augmented visual feedback was presented on a screen in the form of fluctuation circles in the vertical direction that moved upward as the COP_AP and/or COG_AP moved forward, and vice versa. The COP + COG group received the real-time COP_AP and COG_AP feedback simultaneously, whereas the COP group received the real-time COP_AP feedback only. The control group received no visual feedback. In the training session, the COP + COG group was required to maintain an even distance between the COP_AP and COG_AP and reduce the COG_AP fluctuation, whereas the COP group was required to reduce the COP_AP fluctuation while standing on a foam pad. In test sessions, participants were instructed to keep their standing posture as quiet as possible on the foam pad before (pre-session) and after (post-session) the training sessions. In the post-session, the velocity and root mean square of COM_AP in the COP + COG group were lower than those in the control group. In addition, the absolute value of the sum of the COP-COM distances in the COP + COG group was lower than that in the COP group. Furthermore, positive correlations were found between the COM_AP velocity and the COP-COM parameters.
The results suggest that the novel visual feedback training incorporating the COP_AP-COG_AP interaction reduces postural sway better than training using the COP_AP alone during quiet standing. That is, an even COP_AP fluctuation around the COG_AP would be effective in reducing the COM_AP velocity.
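For reference, the COP coordinates discussed above are conventionally derived from force-plate forces and moments. The sketch below uses the standard textbook formulation; axis conventions and sensor-plane offsets vary by plate and laboratory, so this is an illustration rather than the authors' exact processing pipeline:

```python
def cop_from_forceplate(Fx, Fy, Fz, Mx, My, dz=0.0):
    """Center of pressure from force-plate forces (N) and moments (N*m).

    Standard textbook formulation with the plate origin on its top surface;
    dz is the offset of the sensor plane below that surface. cop_x is taken
    here as the mediolateral (COP_ML) and cop_y as the anteroposterior
    (COP_AP) coordinate, but axis conventions differ between plates.
    """
    cop_x = (-My - Fx * dz) / Fz   # mediolateral coordinate
    cop_y = ( Mx - Fy * dz) / Fz   # anteroposterior coordinate
    return cop_x, cop_y
```

The COG_AP signal used in the study is a low-pass counterpart of COP_AP obtained from the same force-plate signals, which is why the two can be fed back simultaneously from one instrument.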
A systematic review: the influence of real time feedback on wheelchair propulsion biomechanics.
Symonds, Andrew; Barbareschi, Giulia; Taylor, Stephen; Holloway, Catherine
2018-01-01
Clinical guidelines recommend that, in order to minimize upper limb injury risk, wheelchair users adopt a semi-circular pattern with a slow cadence and a large push arc. To examine whether real time feedback can be used to influence manual wheelchair propulsion biomechanics. Clinical trials and case series comparing the use of real time feedback against no feedback were included. A general review was performed and methodological quality assessed by two independent practitioners using the Downs and Black checklist. The review was completed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Six papers met the inclusion criteria. Selected studies involved 123 participants and analysed the effect of visual and, in one case, haptic feedback. Across the studies it was shown that participants were able to achieve significant changes in propulsion biomechanics when provided with real time feedback. However, targeting a single propulsion variable might lead to unwanted alterations in other parameters. Methodological assessment identified weaknesses in external validity. Visual feedback could be used to consistently increase push arc and decrease push rate, and may be the best focus for feedback training. Further investigation is required to assess such intervention during outdoor propulsion. Implications for Rehabilitation: Upper limb pain and injuries are common secondary disorders that negatively affect wheelchair users' physical activity and quality of life. Clinical guidelines suggest that manual wheelchair users should aim to propel with a semi-circular pattern, a low push rate, and a large push arc in order to minimise upper limb loading. Real time visual and haptic feedback are effective tools for improving propulsion biomechanics in both complete novices and experienced manual wheelchair users.
Gregson, Rachael Kathleen; Cole, Tim James; Skellett, Sophie; Bagkeris, Emmanouil; Welsby, Denise; Peters, Mark John
2017-05-01
To determine the effect of visual feedback on the rate of chest compressions, secondarily relating the forces used. Randomised crossover trial. Tertiary teaching hospital. Fifty trained hospital staff. A thin sensor-mat placed over the manikin's chest measured rate and force. Rescuers applied compressions to the same paediatric manikin for two sessions. During one session they received visual feedback comparing their real-time rate with published guidelines. Primary outcome: compression rate. Secondary outcomes: compression and residual forces. The rate of chest compressions (compressions per minute; cpm) varied widely (mean (SD) 111 (13), range 89-168), with a fourfold difference in variation during session 1 between those receiving and not receiving feedback (108 (5) vs 120 (20)). The interaction of session by feedback order was highly significant, indicating that this difference in mean rate between sessions was 14 cpm less (95% CI -22 to -5, p=0.002) in those given feedback first compared with those given it second. Compression force (N) varied widely (mean (SD) 306 (94); range 142-769). Those receiving feedback second (as opposed to first) used significantly lower force (adjusted mean difference -80 (95% CI -128 to -32), p=0.002). Mean residual force (18 N, SD 12, range 0-49) was unaffected by the intervention. While visual feedback restricted excessive compression rates to within the prescribed range, the applied force remained widely variable. The forces required may differ with growth, but such variation in treating one manikin is alarming. Feedback technologies additionally measuring force (effort) could help to standardise and define effective treatments throughout childhood. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for examining, comparing, and validating different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
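The entropy-based quality measure is described above only at a high level. A hypothetical sketch of the idea, with each block's significance taken as the product of its distortion and contribution and the score given by Shannon entropy over the normalized significances, might look like the following; the paper's exact weighting almost certainly differs:

```python
import math

def lod_entropy(distortions, contributions):
    """Entropy-style LOD quality score over multiresolution data blocks.

    Each block's 'significance' = distortion * contribution; the score is the
    Shannon entropy of the normalized significance distribution. A balanced
    LOD, where no single block dominates the error, scores high. This is an
    illustrative sketch of the abstract's description, not the paper's
    published formulation.
    """
    sig = [d * c for d, c in zip(distortions, contributions)]
    total = sum(sig)
    if total == 0:
        return 0.0  # no distortion anywhere: degenerate case
    probs = [s / total for s in sig]
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

Mapping each block's significance to a treemap cell, as the abstract describes, then turns this scalar score into a spatial display where a dominating block stands out visually.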
Visual Homing in the Absence of Feature-Based Landmark Information
ERIC Educational Resources Information Center
Gillner, Sabine; Weiss, Anja M.; Mallot, Hanspeter A.
2008-01-01
Despite that fact that landmarks play a prominent role in human navigation, experimental evidence on how landmarks are selected and defined by human navigators remains elusive. Indeed, the concept of a "landmark" is itself not entirely clear. In everyday language, the term landmark refers to salient, distinguishable, and usually nameable objects,…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Aircraft communication and navigation....S.-REGISTERED AIRCRAFT ENGAGED IN COMMON CARRIAGE General § 129.17 Aircraft communication and... accuracy required for ATC; (ii) One marker beacon receiver providing visual and aural signals; and (iii...
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from ‘driver-lost’ scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results. PMID:28809803
Austin, Andrea L; Spalding, Carmen N; Landa, Katrina N; Myer, Brian R; Donald, Cure; Smith, Jason E; Platt, Gerald; King, Heather C
2017-10-27
In an effort to improve chest compression quality among health care providers, numerous feedback devices have been developed. Few studies, however, have focused on the use of cardiopulmonary resuscitation feedback devices for infants and children. This study evaluated the quality of chest compressions with standard team-leader coaching, a metronome (MetroTimer by ONYX Apps), and visual feedback (SkillGuide Cardiopulmonary Feedback Device) during simulated infant cardiopulmonary resuscitation. Seventy voluntary health care providers who had recently completed Pediatric Advanced Life Support or Basic Life Support courses were randomized into 1 of 3 groups performing simulated infant cardiopulmonary resuscitation for 2 minutes continuously: team-leader coaching alone (control), coaching plus metronome, or coaching plus SkillGuide. Rate, depth, and frequency of complete recoil during cardiopulmonary resuscitation were recorded by the Laerdal SimPad device for each participant. The American Heart Association-approved compression technique was randomized to either 2-finger or encircling thumbs. The metronome was associated with a more ideal compression rate than visual feedback or coaching alone (104/min vs 112/min and 113/min; P = 0.003, 0.019). Visual feedback was associated with more ideal depth than auditory feedback (41 mm vs 38.9 mm; P = 0.03). There were no significant differences in complete recoil between groups. Secondary outcomes of compression technique revealed a difference of 1 mm. Subgroup analysis of male versus female participants showed no difference in mean number of compressions (221.76 vs 219.79; P = 0.72), mean compression depth (40.47 vs 39.25; P = 0.09), or rate of complete release (70.27% vs 64.96%; P = 0.54). In the adult literature, feedback devices often show an increase in quality of chest compressions.
Although more studies are needed, this study did not demonstrate a clinically significant improvement in chest compressions with the addition of a metronome or visual feedback device, found no clinically significant difference between the Pediatric Advanced Life Support-approved compression techniques, and found no difference in compression quality between genders.
Prasad, M S Raghu; Manivannan, Muniyandi; Manoharan, Govindan; Chandramohan, S M
2016-01-01
Most of the commercially available virtual reality-based laparoscopic simulators do not effectively evaluate combined psychomotor and force-based laparoscopic skills. Consequently, the lack of training on these critical skills leads to intraoperative errors. To assess the effectiveness of the novel virtual reality-based simulator, this study analyzed the combined psychomotor (i.e., motion or movement) and force skills of residents and expert surgeons. The study also examined the effectiveness of real-time visual force feedback and tool motion during training. Bimanual fundamental (i.e., probing, pulling, sweeping, grasping, and twisting) and complex tasks (i.e., tissue dissection) were evaluated. In both tasks, visual feedback on applied force and tool motion was provided. Participants' skills on these tasks were also assessed with and without visual feedback. Participants performed 5 repetitions of fundamental and complex tasks. Reaction force and instrument acceleration were used as metrics. Surgical Gastroenterology, Government Stanley Medical College and Hospital; Institute of Surgical Gastroenterology, Madras Medical College and Rajiv Gandhi Government General Hospital. Residents (N = 25; postgraduates and surgeons with <2 years of laparoscopic surgery) and expert surgeons (N = 25; surgeons with >4 and ≤10 years of laparoscopic surgery).
In both groups, exertion of large forces and abrupt tool motion were observed during grasping, probing or pulling, and tissue sweeping maneuvers (p < 0.001). Modern day curriculum-based training should evaluate the skills of residents with robust force and psychomotor-based exercises for proficient laparoscopy. Visual feedback on force and motion during training has the potential to enhance the learning curve of residents. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.
Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V
2015-01-01
It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with the instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and to optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency.
Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors such as the amount of the intra-individual variability.
Effects of Visual Feedback-Induced Variability on Motor Learning of Handrim Wheelchair Propulsion
Leving, Marika T.; Vegter, Riemer J. K.; Hartog, Johanneke; Lamoth, Claudine J. C.; de Groot, Sonja; van der Woude, Lucas H. V.
2015-01-01
Background It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Methods Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg and a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables, with instructions to manipulate the presented variable to achieve the highest possible variability (first 4-min block) and to optimize it in the prescribed direction (second 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. Results The feedback group, which practiced with higher intra-individual variability, improved propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. Conclusion These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. 
Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that improvements in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors, such as the amount of intra-individual variability. PMID:25992626
Combined contributions of feedforward and feedback inputs to bottom-up attention
Khorsand, Peyman; Moore, Tirin; Soltani, Alireza
2015-01-01
To deal with the large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses those same inputs to enhance processing in one part of the visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of external inputs, as the nomenclature implies. Nevertheless, evidence from recent experimental and modeling studies points to a role for feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating the mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel the feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages, as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883
Hernandez, Rafael; Onar-Thomas, Arzu; Travascio, Francesco; Asfour, Shihab
2017-11-01
Laparoscopic training with visual force feedback can lead to immediate improvements in force moderation. However, the long-term retention of this kind of learning and its potential decay remain unclear. A laparoscopic resection task and force-sensing apparatus were designed to assess the benefits of visual force feedback training. Twenty-two male university students with no previous experience in laparoscopy underwent relevant FLS proficiency training. Participants were randomly assigned to either a control or a treatment group. Both groups trained on the task for 2 weeks as follows: initial baseline, sixteen training trials, and a post-test immediately after. The treatment group had visual force feedback during training, whereas the control group did not. Participants then performed four weekly test trials to assess long-term retention of training. Outcomes recorded were maximum pulling and pushing forces, completion time, and rated task difficulty. Extreme maximum pulling force values were tapered throughout both the training and retention periods. Average maximum pushing forces were significantly lowered toward the end of training and during the retention period. No significant decay of applied-force learning was found during the 4-week retention period. Completion time and rated task difficulty were higher during training, but results indicate that the difference eventually fades during the retention period. Significant differences in aptitude across participants were found. Visual force feedback training improves certain aspects of force moderation in a laparoscopic resection task. Results suggest that with enough training there is no significant decay of learning within the first month of the retention period. It is essential to account for differences in aptitude between individuals in this type of longitudinal research. This study shows how an inexpensive force-measuring system can be used with an FLS Trainer System after some retrofitting. 
Surgical instructors can develop their own tasks and adjust force feedback levels accordingly.
Visual Feedback of Tongue Movement for Novel Speech Sound Learning
Katz, William F.; Mehta, Sonya
2015-01-01
Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. PMID:26635571
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-01-01
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931
Underwater terrain-aided navigation system based on combination matching algorithm.
Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao
2018-07-01
Because the terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error in the indicative track of the strapdown inertial navigation system (SINS) is large, a Kalman filter is incorporated into the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the filter measurement, the cumulative SINS error is corrected in time by filter feedback, and the indicative track used in ICCP is thereby improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A suitable number of matching points is selected by comparing simulation results for matching time and matching precision. Simulation experiments are carried out according to the ICCP algorithm and the mathematical model. The simulations show that navigation accuracy and stability are improved with the proposed combined algorithm, provided a suitable number of matching points is used. The integrated navigation system is effective in preventing divergence of the indicative track and can meet the underwater, long-term, high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
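The feedback-correction scheme described in this abstract can be illustrated with a hypothetical 1-D sketch (not the authors' implementation; the drift rate, noise levels, and random-walk error model below are illustrative assumptions): a Kalman filter takes the difference between the terrain-matching fix and the SINS output as its measurement, estimates the cumulative SINS error, and subtracts that estimate from the indicative track.

```python
import numpy as np

def run_tan(steps=200, drift=0.5, meas_var=4.0, model_q=0.5, seed=0):
    """Toy 1-D TAN loop: a Kalman filter estimates the cumulative SINS
    error from (SINS output - terrain-matching fix) and feeds it back."""
    rng = np.random.default_rng(seed)
    true_pos, sins_err = 0.0, 0.0
    x, p = 0.0, 1.0                      # Kalman state: error estimate, variance
    for _ in range(steps):
        true_pos += 1.0                  # vehicle advances one unit per step
        sins_err += drift                # slowly accumulating inertial error
        sins_pos = true_pos + sins_err   # uncorrected indicative track
        match_fix = true_pos + rng.normal(0.0, np.sqrt(meas_var))  # noisy fix
        z = sins_pos - match_fix         # measurement: SINS minus matching result
        p += model_q                     # predict: model the error as a random walk
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update the cumulative-error estimate
        p *= 1.0 - k
    # residual error of the corrected track vs. the raw accumulated SINS error
    return (sins_pos - x) - true_pos, sins_err
```

With these illustrative numbers the corrected track stays within a few units of truth, while the uncorrected indicative track has drifted by roughly two orders of magnitude more, which is the divergence-suppression effect the abstract reports.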
Valerio, Stephane; Clark, Benjamin J.; Chan, Jeremy H. M.; Frost, Carlton P.; Harris, Mark J.; Taube, Jeffrey S.
2010-01-01
Previous studies have identified neurons throughout the rat limbic system that fire as a function of the animal's head direction (HD). This HD signal is particularly robust when rats locomote in the horizontal and vertical planes, but is severely attenuated when locomoting upside-down (Calton & Taube, 2005). Given the hypothesis that the HD signal represents an animal's sense of its directional heading, we evaluated whether rats could accurately navigate in an inverted (upside-down) orientation. The task required the animals to find an escape hole while locomoting inverted on a circular platform suspended from the ceiling. In experiment 1, Long-Evans rats were trained to navigate to the escape hole by locomoting from either one or four start points. Interestingly, no animals from the 4-start point group reached criterion, even after 30 days of training. Animals in the 1-start point group reached criterion after about 6 training sessions. In Experiment 2, probe tests revealed that animals navigating from either 1- or 2-start points utilized distal visual landmarks for accurate orientation. However, subsequent probe tests revealed that their performance was markedly attenuated when required to navigate to the escape hole from a novel starting point. This absence of flexibility while navigating upside-down was confirmed in experiment 3 where we show that the rats do not learn to reach a place, but instead learn separate trajectories to the target hole(s). Based on these results we argue that inverted navigation primarily involves a simple directional strategy based on visual landmarks. PMID:20109566
Towers, John; Burgess-Limerick, Robin; Riek, Stephan
2014-12-01
The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications. Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research. A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene. The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful. Conclusion: A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display. Pilots operating aircraft, such as helicopters and unmanned aerial vehicles, may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model in which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback, derived from studies of attention and thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
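The contrast drawn in this abstract, between feedback that reconstructs the input and feedback that multiplies the feedforward drive, can be sketched minimally (all numbers are made up for illustration; this is not the authors' dynamic model): in the gain scheme, top-down feedback amplifies the attended unit's feedforward response without altering the others.

```python
import numpy as np

def gain_feedback(feedforward, feedback, g=1.5):
    """Feedback as an attentional gain on the feedforward signal:
    each unit's drive is scaled by (1 + g * its feedback input)."""
    return feedforward * (1.0 + g * feedback)

ff = np.array([0.2, 0.8, 0.4])   # feedforward responses of three units
fb = np.array([0.0, 1.0, 0.0])   # top-down feedback targets the middle unit

out = gain_feedback(ff, fb)
# the attended unit is boosted (0.8 -> 2.0); the unattended units pass
# through unchanged, so feedback tunes processing rather than
# reconstructing the input as a generative model would
```

The design point is that the same feedforward pattern is preserved up to a task-dependent rescaling, matching the abstract's claim that feedback tunes early processing to the needs of the task.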
Reading Deeply for Disciplinary Awareness and Political Judgment
ERIC Educational Resources Information Center
Staudinger, Alison
2017-01-01
What happens when students become better readers? Cultivating deep reading habits in students to help them navigate disciplinary cultures respects student autonomy. Scholarly literature predicts that three linked practices improve student reading: practice with feedback, explicit in-class work on reading strategies, and disciplinary norm…
Entorhinal-CA3 Dual-Input Control of Spike Timing in the Hippocampus by Theta-Gamma Coupling.
Fernández-Ruiz, Antonio; Oliva, Azahara; Nagy, Gergő A; Maurer, Andrew P; Berényi, Antal; Buzsáki, György
2017-03-08
Theta-gamma phase coupling and spike timing within theta oscillations are prominent features of the hippocampus and are often related to navigation and memory. However, the mechanisms that give rise to these relationships are not well understood. Using high spatial resolution electrophysiology, we investigated the influence of CA3 and entorhinal inputs on the timing of CA1 neurons. The theta-phase preference and excitatory strength of the afferent CA3 and entorhinal inputs effectively timed the principal neuron activity, as well as regulated distinct CA1 interneuron populations in multiple tasks and behavioral states. Feedback potentiation of distal dendritic inhibition by CA1 place cells attenuated the excitatory entorhinal input at place field entry, coupled with feedback depression of proximal dendritic and perisomatic inhibition, allowing the CA3 input to gain control toward the exit. Thus, upstream inputs interact with local mechanisms to determine theta-phase timing of hippocampal neurons to support memory and spatial navigation. Copyright © 2017 Elsevier Inc. All rights reserved.
Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S
2014-06-01
Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.
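The wide-field optic-flow principle behind this kind of corridor navigation can be illustrated with a toy sketch (this is not the authors' structured-singular-value controller; the distances, speed, and gain below are invented): translation produces lateral flow inversely proportional to wall distance, so balancing the averaged left- and right-field flow recenters the vehicle without any pose estimation or depth map.

```python
def centering_command(d_left, d_right, speed=1.0, gain=0.5):
    """Steering command from wide-field flow balance.
    Positive output = move right, away from the nearer left wall."""
    flow_left = speed / d_left     # lateral flow magnitude ~ speed / distance
    flow_right = speed / d_right
    return gain * (flow_left - flow_right)   # steer away from stronger flow
```

Iterating this command on a vehicle's lateral offset in a corridor drives the offset toward zero, which is the robust centering behaviour the quadrotor experiments demonstrate in the straight and S-shaped corridors.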
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
Altered figure-ground perception in monkeys with an extra-striate lesion.
Supèr, Hans; Lamme, Victor A F
2007-11-05
The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that the primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex, the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception, we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
Performance drifts in two-finger cyclical force production tasks performed by one and two actors.
Hasanbarani, Fariba; Reschechtko, Sasha; Latash, Mark L
2018-03-01
We explored changes caused by turning visual feedback off in a cyclical two-finger force production task performed either by the index and middle fingers of the dominant hand or by the index fingers of two persons. Based on an earlier study, we expected drifts in finger force amplitude and midpoint without a drift in relative phase. The subjects performed two rhythmical tasks at 1 Hz, paced by an auditory metronome. One task required cyclical changes in total force magnitude without changes in the sharing of force between the two fingers. The other required cyclical changes in force sharing without changes in total force magnitude. Subjects were provided with visual feedback showing total force magnitude and force sharing via cursor motion along the vertical and horizontal axes, respectively. Visual feedback was then turned off, first on the variable that was not required to change and then on both variables. Turning visual feedback off led to a drift of mean force toward lower magnitudes while force amplitude increased. There was a consistent drift in relative phase in the one-hand task, with the index finger leading the middle finger. No consistent relative-phase drift was seen in the two-person tasks. The shape of the force cycle changed without visual feedback, as reflected in lower similarity to a perfect cosine shape and in more time spent at lower force magnitudes. The data confirm findings of earlier studies regarding force amplitude and midpoint changes, but falsify predictions of an earlier proposed model with respect to relative phase changes. We discuss factors that could contribute to the observed relative-phase drift in the one-hand tasks, including the leader-follower pattern generalized to two-effector tasks performed by one person.
Cheng, Adam; Lin, Yiqun; Nadkarni, Vinay; Wan, Brandi; Duff, Jonathan; Brown, Linda; Bhanji, Farhan; Kessler, David; Tofil, Nancy; Hecker, Kent; Hunt, Elizabeth A
2018-01-01
We aimed to explore whether a) step stool use is associated with improved cardiopulmonary resuscitation (CPR) quality; b) provider adjusted height is associated with improved CPR quality; and if associations exist, c) determine whether just-in-time (JIT) CPR training and/or CPR visual feedback attenuates the effect of height and/or step stool use on CPR quality. We analysed data from a trial of simulated cardiac arrests with three study arms: No intervention; CPR visual feedback; and JIT CPR training. Step stool use was voluntary. We explored the association between 1) step stool use and CPR quality, and 2) provider adjusted height and CPR quality. Adjusted height was defined as provider height + 23 cm (if step stool was used). Below-average height participants were ≤ gender-specific average height; the remainder were above average height. We assessed for interaction between study arm and both adjusted height and step stool use. One hundred twenty-four subjects participated; 1,230 30-second epochs of CPR were analysed. Step stool use was associated with improved compression depth in below-average (female, p=0.007; male, p<0.001) and above-average (female, p=0.001; male, p<0.001) height providers. There is an association between adjusted height and compression depth (p<0.001). Visual feedback attenuated the effect of height (p=0.025) on compression depth; JIT training did not (p=0.918). Visual feedback and JIT training attenuated the effect of step stool use (p<0.001) on compression depth. Step stool use is associated with improved compression depth regardless of height. Increased provider height is associated with improved compression depth, with visual feedback attenuating the effects of height and step stool use.
Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.
2017-01-01
This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinates for the fingertips was tested using manipulations of visual feedback: young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, the sharing ratio, both, or neither. Predictions based on the memory hypothesis were falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift, or drifts to higher forces, were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates were confirmed. In particular, we observed a superposition of two drift processes: a drift of total force to lower magnitudes and a drift of the sharing ratio toward 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396
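The "drift as relaxation" interpretation in this abstract can be illustrated with a toy model (the time constants, set-points, and initial values below are assumptions for illustration, not fitted to the study's data): with visual feedback removed, total force relaxes exponentially toward a lower back-coupled value while the sharing ratio relaxes toward 50:50, each with its own characteristic time.

```python
import numpy as np

def drift(duration=30.0, dt=0.1, tau_force=15.0, tau_share=10.0,
          f0=20.0, f_inf=16.0, s0=0.7, s_inf=0.5):
    """Exponential relaxation of total force and sharing ratio after
    visual feedback is removed (all parameters hypothetical)."""
    t = np.arange(0.0, duration, dt)
    force = f_inf + (f0 - f_inf) * np.exp(-t / tau_force)   # drifts lower
    share = s_inf + (s0 - s_inf) * np.exp(-t / tau_share)   # drifts to 50:50
    return t, force, share
```

In this picture the observed drift magnitude over a fixed trial depends on the time constant, so the larger drift reported for the dominant hand would correspond to a shorter relaxation time, i.e. lower stability in the drift direction.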
Synergies in Astrometry: Predicting Navigational Error of Visual Binary Stars
NASA Astrophysics Data System (ADS)
Gessner Stewart, Susan
2015-08-01
Celestial navigation can employ a number of bright stars that are in binary systems. Often these are unresolved, appearing as a single center-of-light object. A number of them, however, are wide pairs, which could introduce a margin of error in the navigation solution if not handled properly. To illustrate the importance of good orbital solutions for binary systems, as well as good astrometry in general, the relationship between the center-of-light versus individual catalog positions of celestial bodies and the error in terrestrial position derived via celestial navigation is demonstrated. From the list of navigational binary stars, fourteen binary systems with at least 3.0 arcseconds apparent separation are explored. Maximum navigational error is estimated under the assumption that the bright star in the pair is observed at maximum separation but the center of light is employed in the navigational solution. The relationships between navigational error and separation, orbital period, and observer's latitude are discussed.
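The scale of the error discussed here follows from the classic celestial-navigation rule of thumb that one arcminute of altitude error corresponds to roughly one nautical mile of terrestrial position error. A back-of-the-envelope sketch (the worst case below assumes the full apparent separation as the offset; the actual center-of-light offset is smaller, weighted by the components' brightness, and the paper's estimates also depend on geometry and latitude):

```python
NM_PER_ARCMIN = 1.0       # rule of thumb: 1 arcmin of altitude ~ 1 nautical mile
METERS_PER_NM = 1852.0    # international nautical mile

def fix_error_m(separation_arcsec):
    """Worst-case position error (meters) if the catalog center-of-light
    position is used while the bright component is observed at full
    separation from it."""
    return (separation_arcsec / 60.0) * NM_PER_ARCMIN * METERS_PER_NM

for sep in (3.0, 10.0, 60.0):
    print(f"{sep:5.1f} arcsec separation -> up to {fix_error_m(sep):6.0f} m")
```

So even the paper's 3-arcsecond threshold corresponds to an error on the order of a hundred meters, small for ocean navigation but a useful illustration of why accurate orbital solutions matter.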
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance autonomous deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly on a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) AR&D scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it. The technique is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
Field evaluation of a wearable multimodal soldier navigation system.
Aaltonen, Iina; Laarni, Jari
2017-09-01
Challenging environments pose difficulties for terrain navigation, and therefore wearable and multimodal navigation systems have been proposed to overcome these difficulties. Few such navigation systems, however, have been evaluated in field conditions. We evaluated how a multimodal system can aid in navigating in a forest in the context of a military exercise. The system included a head-mounted display, headphones, and a tactile vibrating vest. Visual, auditory, and tactile modalities were tested and evaluated using unimodal, bimodal, and trimodal conditions. Questionnaires, interviews and observations were used to evaluate the advantages and disadvantages of each modality and their multimodal use. The guidance was considered easy to interpret and helpful in navigation. Users required the displayed information to be simple, which partially conflicted with the request to have both distance and directional information available. Copyright © 2017 Elsevier Ltd. All rights reserved.
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared, with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
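The two-mechanism account can be illustrated with a standard dual-rate state-space model (fast and slow adaptive states driven by the proprioceptive prediction error) plus a visual correction that cancels the residual error on each stride without itself learning. All parameter values in this sketch are hypothetical, chosen only to illustrate the dynamics.

```python
def simulate_split_belt(n_trials=500, p=1.0, visual=False):
    """Dual-rate adaptation with an optional visual error-correction process.

    Two adaptive states learn from the proprioceptive prediction error e;
    with visual feedback, subjects cancel the residual error voluntarily,
    so observed performance error is zero while learning proceeds unchanged.
    Removing vision re-exposes the residual proprioceptive error.
    """
    Af, Bf = 0.92, 0.10     # fast process: forgets quickly, learns quickly
    As, Bs = 0.996, 0.02    # slow process: retains well, learns slowly
    xf = xs = 0.0
    observed_errors = []
    for _ in range(n_trials):
        e = p - (xf + xs)                            # proprioceptive prediction error
        observed_errors.append(0.0 if visual else e) # visual correction hides e
        xf = Af * xf + Bf * e                        # learning is driven by e
        xs = As * xs + Bs * e                        # in both feedback conditions
    return observed_errors
```

Running the model with and without `visual=True` reproduces the qualitative result: visual feedback flattens observed errors immediately, while the underlying adapted state follows the same trajectory in both conditions.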
Effect of vibrotactile feedback on an EMG-based proportional cursor control system.
Li, Shunchong; Chen, Xingyu; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang
2013-01-01
Surface electromyography (sEMG) has been introduced into bio-mechatronic systems; however, most of them lack sensory feedback. In this paper, the effect of vibrotactile feedback in a myoelectric cursor control system is investigated quantitatively. Simultaneous and proportional control signals are extracted from EMG using a muscle synergy model. Different types of feedback, including vibrotactile and visual feedback, are added, assessed, and compared with each other. The results show that vibrotactile feedback is capable of improving the performance of an EMG-based human-machine interface.
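The synergy-based extraction of proportional control signals can be sketched with plain non-negative matrix factorization: EMG envelopes are factored as E ~ W @ H, and the time-varying activations H drive the cursor. This is an illustrative stand-in using multiplicative updates, not the authors' exact algorithm, and the dimensions are made up.

```python
import numpy as np

def extract_synergies(emg, n_syn=2, n_iter=1000, seed=0):
    """Factor non-negative EMG envelopes (channels x samples) as E ~ W @ H.

    W: muscle synergy weights (channels x n_syn)
    H: activation coefficients over time (n_syn x samples), usable as
       simultaneous, proportional control signals.
    """
    rng = np.random.default_rng(seed)
    n_ch, n_t = emg.shape
    W = rng.random((n_ch, n_syn)) + 1e-6
    H = rng.random((n_syn, n_t)) + 1e-6
    for _ in range(n_iter):
        # Standard multiplicative updates for the Frobenius-norm objective
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-9)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

On synthetic rank-2 non-negative data, the factorization reconstructs the input closely, which is the property the control scheme relies on.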
Simonsen, Daniel; Popovic, Mirjana B; Spaich, Erika G; Andersen, Ole Kæseler
2017-11-01
The present paper describes the design and test of a low-cost Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during the execution of an upper limb exercise. Eleven sub-acute stroke patients with varying degrees of upper limb function were recruited. Each subject participated in a control session (repeated twice) and a feedback session (repeated twice). In each session, the subjects were presented with a rectangular pattern displayed on a vertically mounted monitor embedded in the table in front of the patient. The subjects were asked to move a marker inside the rectangular pattern by using their most affected hand. During the feedback session, the thickness of the rectangular pattern was changed according to the performance of the subject, and the color of the marker changed according to its position, thereby guiding the subject's movements. In the control session, the thickness of the rectangular pattern and the color of the marker did not change. The results showed that movement similarity and smoothness were higher in the feedback session than in the control session, while movement duration was longer. The present study showed that adaptive visual feedback delivered using the Kinect sensor can increase the similarity and smoothness of upper limb movement in stroke patients.
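The adaptive feedback described above reduces to two simple rules: color the marker by whether it lies within the pattern, and adjust the pattern's thickness to the subject's performance. The color scheme, performance target, and adaptation step below are hypothetical, since the abstract does not specify them.

```python
def marker_color(distance_from_path, half_thickness):
    """Hypothetical color rule: green while the marker stays inside the
    rectangular pattern, red once it strays outside."""
    return "green" if abs(distance_from_path) <= half_thickness else "red"

def adapt_thickness(half_thickness, inside_fraction, target=0.8, step=0.1,
                    min_half_thickness=0.5):
    """Hypothetical adaptation rule: shrink the corridor when the subject
    keeps the marker inside more than `target` of the time, widen it
    otherwise, never going below a minimum width."""
    if inside_fraction > target:
        return max(min_half_thickness, half_thickness - step)
    return half_thickness + step
```

In a session loop, `adapt_thickness` would be called once per completed lap of the pattern, so the task difficulty tracks the patient's current ability.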